Conscium
Industry | Artificial intelligence, AI safety
---|---
Founded | 2024
Founders | Daniel Hulme, Ed Charvet, Calum Chace, Ted Lappas, Panagiotis Repoussis
Website | conscium.ai
Conscium is a London-based artificial intelligence (AI) safety company founded in 2024. It focuses on AI agent verification, neuromorphic computing development, and research into artificial consciousness.
Workstreams
AI agent verification
Conscium verifies AI agents developed by third parties to ensure they act in ways consistent with intended designs and purposes.[1] The company emphasizes the need for trustworthy and predictable AI behavior given the anticipated widespread deployment of autonomous systems.
Neuromorphic systems development
Conscium is engaged in the development of neuromorphic computing technologies, aiming to create systems that process information in ways similar to biological brains. These systems are designed to be more adaptive, scalable, and energy-efficient than traditional AI architectures.[2]
Research into artificial conscious systems
The company's research into artificial consciousness is led by Mark Solms, Chair of Neuropsychology at the University of Cape Town.[3] The research investigates the potential for machines to develop conscious experiences and explores the ethical and moral implications if such systems were to emerge.
History and team
Conscium was founded by Daniel Hulme, a British businessman and academic specializing in AI, along with Ed Charvet, Calum Chace, Ted Lappas, and Panagiotis Repoussis.[4]
The company's advisory board includes neuroscientists and computer scientists such as Anil Seth, Karl Friston, Anthony Finkelstein, Benjamin Rosman, David Wood, Jonathan Shock, Megan Peters, Moran Cerf, Nicholas Humphrey, Nicola Clayton, Nikola Kasabov, Steve Furber, and Suzanne Livingston.[4]
Conscium is creating a neuromorphic computing laboratory to support its research into machine consciousness.
Research
In January 2025, Conscium, in collaboration with the University of Oxford's Global Priorities Institute, published a paper titled *Principles for Responsible AI Consciousness Research*.[5] The paper urges caution and ethical consideration in experiments that could involve the creation of conscious artificial systems.
Conscium has also been cited in broader discussions about the potential risks of sentient AI, including coverage in outlets such as The Guardian and Nature, and has been described as an advocate for careful governance of AI consciousness research.[3][1]
References
- ^ a b AI systems could be ‘caused to suffer’ if consciousness achieved, says research – The Guardian
- ^ We should do more than fret over AI’s feelings – Financial Times
- ^ a b What should we do if AI becomes conscious? – Nature
- ^ a b Conscium About Us
- ^ Principles for Responsible AI Consciousness Research – Journal of AI Research