Dr. Hector Zenil


BSc Math (UNAM), PGCert Nanotech (Oxon), MPhil Logic (Paris 1/ENS), PhD CompSci (Lille), PhD Phil (Sorbonne).

Associate Professor / Senior Lecturer
Research Departments of Biomedical Computing & Imaging Physics and Engineering
School of Biomedical Engineering & Imaging Sciences
Faculty of Life Sciences & Medicine & King’s Institute for Artificial Intelligence
King’s College London

Office No. 1, 5th Floor, Becket House, SE1 7EU, St. Thomas Campus, London

Academic Entrepreneur in Residence @ London Institute for Healthcare Engineering
Independent AI Scientific Advisor @ The Alan Turing Institute
Founder and Director @ Oxford Immune Algorithmics
Board Trustee @ The British Society for Research Ageing
Elected member @ The London Mathematical Society
Fellow @ British Royal Society of Medicine
Member @ Canadian College of Health Leaders
Founding member @ EPSRC Future Blood Network+

Media interview at Westminster, London, during St James House’s Parliament History Project on the 75th anniversary of the NHS, September 2023

My most recent books

World Scientific Press/Imperial College (2012), with a foreword by Sir Roger Penrose; Springer Nature (2022); and Cambridge University Press (2023)
All available on Amazon

Methods and Applications of Algorithmic Complexity offers an approach to algorithmic complexity rooted in and motivated by the theory of algorithmic probability. It explores how the conditions required for the numerical application of algorithmic complexity can be relaxed while staying true to the first principles of the theory, and it explains what distinguishes algorithmic complexity from computable or statistical measures, including those based on (lossless) compression schemes such as LZW and its cognates, which are in truth more closely related to traditional Shannon entropy and demonstrably inadequate for characterising causal, mechanistic principles.

Algorithmic Information Dynamics (AID) is a type of digital calculus for causal discovery and causal analysis in software space, the result of combining perturbation and counterfactual analysis with algorithmic information theory. It provides the means to navigate software space, much as in the film The Matrix. Watch the video below for more information, or read the Scholarpedia page on AID for more technical details.

Introduced the field of Algorithmic Information Dynamics

As Featured in Nature

Nature, the world’s top science journal, produced a video (on its own initiative, not paid for) to explain our research based on algorithmic probability, an Artificial General Intelligence approach to causal discovery, following our article on causal deconvolution: a type of AGI able to understand cause and effect.

  • Reddit and PLOS Science invited me to an Ask Me Anything session that ranked among the top 10 all-time Reddit/PLOS Science AMAs in 2018.

  • Quanta produced a podcast to explain our research (published by the Royal Society) on the computational basis of the emergence of genes.

  • I was invited to write an essay imagining what computing will look like in the year 2065, published by Springer Nature (ed. Prof. A. Adamatzky): Reprogramming Matter, Life, and Purpose.

Short Bio

Dr. Hector Zenil currently serves as director and advisor for several company and trust boards in healthcare in the UK and Sweden. He is the sole founder and Chief Visionary Officer of Oxford Immune Algorithmics, an AI spin-out from the University of Oxford, and was its CEO for its first five years. He is a mentor at the CDL Oxford Saïd Business School.

For the last ten years, he has been associated with the so-called Golden Triangle universities in the UK: affiliated with Oxford and Cambridge as a faculty member and senior researcher and, more recently, serving as Associate Professor/Senior Lecturer in Biomedical Engineering in the Faculty of Life Sciences and Medicine at King’s College London. Before that, he was an Assistant Professor and lab leader at the Algorithmic Dynamics Lab, Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute (the institution that awards the Nobel Prize in Physiology or Medicine) and SciLifeLab.

He was a senior researcher and policy advisor at The Alan Turing Institute, the UK National Institute for Data Science and AI, financially supported by the Office of Naval Research (U.S. Department of Defense). He remains officially affiliated with the Institute as one of only nine independent AI scientific advisors.

For the last 20 years he has pioneered and championed the transformation of one of the most abstract and theoretical subjects in mathematics, concerned with uncomputability and undecidability, into an empirical science of algorithmic information that moves beyond Shannon entropy and other limited approaches based on statistical lossless compression algorithms, and towards mechanistic causality.

As a result, he introduced Algorithmic Information Dynamics (AID), a new field devoted to the study of causality in dynamical systems, in particular living systems, within what he and his colleagues call ‘software space’: the space of all possible computable, explainable models. AID uses a form of Artificial General Intelligence based on algorithmic probability for optimal inference, amounting to a form of the largest ‘language model of models’, a generalisation of current predictive and generative AI.

He is the author of over 130 peer-reviewed papers in some of the highest-impact journals across multiple disciplines, including journals of Nature and the Royal Society, and of about eight books, two of which have been published in recent years by Springer Nature and Cambridge University Press, and one of which carries a foreword by Sir Roger Penrose (Nobel Prize in Physics 2020).

In the U.S., he was a visiting scholar at Carnegie Mellon and a member of the NASA Payload team for the Mars Biosatellite project at MIT, where he was in charge of developing the tracking software to study the effects of microgravity on small living organisms.

Also in the U.S., as part of the original Wolfram|Alpha team of about five people (today around 300), based in Boston and working directly with Stephen Wolfram in his Special Projects group, he contributed to the computational-linguistics code behind the AI engines of Apple’s Siri and Amazon’s Alexa that enables these systems to understand and answer contextual factual questions; the same technology was later introduced as the first official retrieval-augmented generation plugin approved by OpenAI to reduce GPT-4 hallucinations.

In 2017 he became the Managing Editor of Complex Systems, the first journal in the field of complexity, founded by Stephen Wolfram in 1987. He serves as Associate Editor for several journals, including Entropy, Information, Frontiers in Artificial Intelligence, PLOS Complex Systems and Complexity, and for book series such as Springer Nature’s series on complexity.

He holds two PhDs: one from the University of Lille 1, France, in Computer Science (highest honours) and one from the Sorbonne (Paris 1) in Logic and Epistemology (highest honours). He also holds a Master’s in Logic and Philosophy from the ENS/Paris 1 (IHPST) and a first-year Master’s (PGCert) degree in Nanotechnology from the University of Oxford.

Gregory Chaitin, one of the founding fathers of modern computer science and complexity theory, described Dr. Zenil in his written PhD thesis committee report as a “new kind of practical theoretician” and later praised his research in the context of an opinion expressed by Marvin Minsky (quoted below).

He has been awarded grants from the Swedish Research Council (VR), the Foundational Questions Institute (FQXi) and the Silicon Valley Foundation, and has won several awards: from The Kroto Institute and the University of Sheffield (Behavioural and Evolutionary Theory Lab), from the UK Department for International Trade for representing the future of healthcare in the UK at Dubai Expo 2020, and first place in the Etihad AI prize against participants such as Microsoft and Accenture.

He was granted French citizenship by President Nicolas Sarkozy in 2011 under his policy of academic excellence, and he also holds Mexican and British nationalities.

He is the 2024 recipient of the Charles François Prize awarded by the International Academy for Systems and Cybernetic Sciences at the World Conference on Complex Systems.

A formal approach to the semantics of SETI and technosignature detection

For too long, scientists and thinkers have made rather simplistic assumptions about how zero-knowledge messages, from the Arecibo message to the Voyager discs, could be interpreted or deciphered when exchanging information between intelligent beings with no common language. How can we speak to diverse minds on Earth and beyond? The same rules can help us understand everything from immune cells, which have produced their own cytokine-based grammar for recognising threats and diseases, to intelligent animal species capable of sophisticated individual and social language.

Because of its extreme simplicity yet high expressivity in demonstrating a key point about causality and complexity, the piece of computer code I am proudest of to date is the highly nested recursive function below, written in the Wolfram Language and run in Mathematica. I wrote it in about an hour on a train from Gothenburg back to Stockholm, after attending a Nobel Prize event, while discussing a new paper idea with my friend and colleague Dr. Narsis Kiani. The result was published as part of a paper of ours in the peer-reviewed journal Physical Review E, published by the American Physical Society:

The single line of code shows how a super-compact, fully deterministic algorithm can fool any statistical inspection, including Shannon entropy-based methods (comprising popular compression algorithms such as LZW, often abused in complexity science), into believing that the resulting object may be random to an uninformed observer.

The resulting evolving object is always a connected directed graph that can grow to any size with entropy-divergent properties. Its degree sequence grows at an (almost) maximal entropy rate (and the graph can be fully reconstructed from it), but its adjacency matrix (from which the graph can also be fully reconstructed) grows with (almost) minimal entropy values.

(* One growth step: connect the vertex labelled d+1 (where d is the current maximum degree) to consecutive vertices, creating them as needed, until its degree reaches d+1 *)
ZK[graph_] := EdgeAdd[graph, Rule @@@ Distribute[{Max[VertexDegree[graph]] + 1, Table[i, {i, (Max[VertexDegree[graph]] + 2), (Max[VertexDegree[graph]] + 1) + (Max[VertexDegree[graph]] + 1) - VertexDegree[graph, Max[VertexDegree[graph]] + 1]}]}, List]]

Its generating code is NestList[ZK, Graph[{1 -> 2}], n], where n is the number of iterations determining the graph size and 1 -> 2 is the 2-node, single-edge graph it begins with. The algorithm keeps adding nodes and edges so that the connected graph keeps growing and diverging with an apparently maximal-entropy degree sequence.
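As a minimal sketch of how this divergence can be probed numerically (using only built-in Wolfram Language functions and the ZK function defined above; the iteration count and the simple frequency-based entropy estimator here are my own illustrative choices, not necessarily those used in the published paper), one can compare the Shannon entropy of the degree sequence with that of the flattened adjacency matrix as the graph grows:

(* Grow the ZK graph for 60 iterations and track two entropies per step *)
graphs = NestList[ZK, Graph[{1 -> 2}], 60];
h[list_] := N[Entropy[2, list]];  (* base-2 Shannon entropy of the value distribution of a list *)
degreeSequenceEntropy = h[VertexDegree[#]] & /@ graphs;
adjacencyMatrixEntropy = h[Flatten[Normal[AdjacencyMatrix[#]]]] & /@ graphs;
ListLinePlot[{degreeSequenceEntropy, adjacencyMatrixEntropy}, PlotLegends -> {"degree sequence", "adjacency matrix"}, AxesLabel -> {"iteration", "entropy (bits)"}]

With this estimator, the degree-sequence curve should keep climbing while the adjacency-matrix curve falls towards zero, reflecting the divergence described above.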

This is one of the foundations for understanding our research, which looks for possible underlying generating mechanisms beyond the statistical methods that have dominated science for decades, if not centuries. It can go beyond, or be combined with, methods from statistical mechanics, but it is grounded in and motivated by the principles of algorithmic probability, a powerful form of Artificial General Intelligence.

The most important discovery in science according to Minsky

Marvin Minsky, widely considered the founding father of Artificial Intelligence, made the following astonishing claim about algorithmic complexity and algorithmic probability, describing what turns out to be exactly my own research programme, in a closing statement months before he passed away:

“It seems to me that the most important discovery since Gödel was the discovery by Chaitin, Solomonoff and Kolmogorov of the concept called Algorithmic Probability which is a fundamental new theory of how to make predictions given a collection of experiences and this is a beautiful theory, everybody should learn it, but it’s got one problem, that is, that you cannot actually calculate what this theory predicts because it is too hard, it requires an infinite amount of work. However, it should be possible to make practical approximations to the Chaitin, Kolmogorov, Solomonoff theory that would make better predictions than anything we have today. Everybody should learn all about that and spend the rest of their lives working on it.”

The Pervasiveness of Universal Computation

I became interested in neural networks from the standpoint of computability and complexity theories in my early 20s, when I was writing my final-year dissertation for my BSc degree in math at UNAM. Today, I am helping revolutionise the field by reintroducing the theories of computability and algorithmic complexity into AI and neural networks.

My current research consists of helping machine and deep learning see beyond statistical patterns, in cleverer ways than simple pattern matching. By introducing algorithmic probability into Artificial Intelligence, I help the field reincorporate abstract reasoning and causation into current AI trends.

Current approaches in deep and machine learning are known to underperform in tasks requiring abstraction and logical inference, and are therefore very limited. An example of our research in this direction is our paper published in Nature Machine Intelligence, which can be read here for free (no paywall).

These cellular automata (CAs), which we have proven to be Turing-universal using novel methods, and which are thus able to compute any computable function, are the result of combining the power of extremely simple computer programs, Elementary Cellular Automata (ECAs):

Composition of ECA rules 50 and 37 with colour remapping, leading to a 4-colour Turing-universal CA emulating rule 110.

Composition of ECA rules 170, 15 and 118 with colour remapping, leading to a 4-colour Turing-universal CA emulating rule 110.

How a convolutional deep neural network trained with a large set of fine art paintings ‘sees’ me.

As reported in our paper published in the Journal of Cellular Automata, we proved that these two 4-colour cellular automata, found by exploring rule compositions, are Turing universal. This means that these CAs can, in principle, run MS Windows and any other piece of software (even if very inefficiently).

These new CAs helped us show how the Boolean composition of two and three ECA rules can emulate rule 110.

This also means that these new CAs can be decomposed into simpler rules, illustrating the process of causal composition and decomposition.

The methods also constitute a form of sophisticated causal coarse-graining learning that we have explored in other papers such as this one. In the same paper, we also introduced a minimal set of ECA rules that can generate all others by Boolean composition.
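As an illustrative sketch only (not the exact construction from our paper, which also involves the colour remapping mentioned above; the rule pair, lattice width and step count below are arbitrary choices for illustration), the following Wolfram Language lines show what composing ECA rules means operationally: each macro-step applies the first rule for one step and then the second rule to the result.

(* Sketch: one composed step applies each elementary CA rule in turn (here rules 50 and 37) *)
composedStep[rules_List][state_] := Fold[Last[CellularAutomaton[#2, #1, 1]] &, state, rules];
(* A single black cell on a width-201 cyclic background of white cells *)
init = ReplacePart[ConstantArray[0, 201], 101 -> 1];
(* Evolve the composition for 100 macro-steps and display the space-time diagram *)
ArrayPlot[NestList[composedStep[{50, 37}], init, 100]]

Swapping {50, 37} for {170, 15, 118} gives the three-rule composition mentioned above, again without the colour remapping used in the published construction.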

In this other paper, we also found strong evidence of pervasive universal computation in software space.

Other authored and edited books

‘Lo que cabe en el espacio’, a short book I prepared right after my BSc degree, is available for Kindle and for free in mobi and PDF formats.

As contributor