Background
Raymond Kurzweil believes computers will soon be able to outperform humans on any cognitive task, that computers will surpass human-level general intelligence. He's not delusional. Computers already surpass us in math. Computers already have a more reliable memory. Computers talk to us from mobile devices. On some media outlets, computers write the news. A computer even managed to fool judges into thinking it was a 13-year-old boy, arguably passing the Turing test. It certainly seems like computers are becoming more and more like us.
Kurzweil based his prediction on the rate of development in computer architectures (i.e., Moore's law) and in computer science. His assumption is that machine general intelligence will be a computer program, and so all we need is strong enough hardware and software. Still, will that suffice? To clarify, in most other areas of computer science, the theory came well before the practice. We had the Turing machine model well before we had personal computers. We discovered the laws of electronics well before we built a transistor. We had Boolean logic well before we had CPUs. When computers first emerged, be it a slide rule (a hand-operated analog computer developed in the 17th century), a tide-predicting machine (a system of pulleys from 1872), etc., they were nothing like a human mind. We built whatever worked. We built tools.
General intelligence might be a slightly different beast. Machine general intelligence is mimetic. We discover its models by reflecting on our own cognitive capabilities, breaking them into their components, so as to approximate human cognitive capacities. We're attempting to recreate these cognitive components in computers, be it machine learning, natural language processing, or perception, but it's not like we know modern computers should be good at it. We know the wetware in our skulls is better at it, and uses less energy and space to do the same thing, but we simply don't have any other choice. Moreover, even the math we use in AI development is mimetic. For example, at their core, neural networks, which laid the foundation for modern machine learning, are an attempt to recreate brain cell formations (i.e., neurons) in a mathematical model. And lest we forget the Turing test. The very success criterion for creating machine general intelligence is to build a machine with whom interactions would be indistinguishable from those we have with humans. So while we can call this computer science, it is all very atypical.
As much as we want to dismiss these differences as anecdotal, they might actually be fundamental. To clarify, general intelligence should be the product of evolution, and in evolution, patterns emerge before purpose. First mutations emerge, and only then do they become advantageous. When such mutations appear, they are limited to but a few genes, while most of the DNA remains intact. Nature, and more specifically entropy, cannot afford a gazillion lines of code to implement a feature. There needs to be a simple model behind any innovation nature selects. Like a fractal, this model should spread throughout the organism. It's highly improbable that several different mutations emerge simultaneously and work together in harmony. With respect to general intelligence, there's no way it emerged in a completely unintelligent organism. Most of its wetware should have existed prior, but lacked its special sauce. To summarize, general intelligence should have a relatively simple model at its core.
This model should be to general intelligence as a Turing machine is to modern computers. It should be the abstraction at its root, a concept we could easily summarize on a piece of paper. If it's too complex or ambiguous, it's probably wrong. If the model needs to be "special-cased" for numerous or ambiguous requirements, it's probably the wrong approach. That is not to say building machine general intelligence should be easy, but once we understand it, it should be relatively simple to explain.
None of these arguments are rigorous, and as far as I know, we haven't discovered this "simple model for natural general intelligence". Still, I came across an idea that might be useful. Even if it's wrong, incomplete, or irrelevant, it might get your thoughts rolling. I call it "inverted functional programming". That's a bit of a long phrase, so going forward I'll refer to it as IFP, for short.
Principles
Broadly speaking, IFP is similar to imperative and procedural programming, with a few unique nuances. It describes how computational agents would operate if they inverted all the principles and themes of functional programming. To understand what that means, it is best to simply review them.
Note that going forward, I will use the terms general intelligence and natural thought interchangeably.
Functional programming is inspired by lambda calculus
As in imperative programming, in IFP, functions mutate the global state, rather than map values to other values. Still, unlike imperative programming, in IFP, functions always access the global state in its entirety. In abstract mathematical jargon, we could say an IFP function always takes the form
f(GSₜ) = GSₜ₊₁
Where GSₜ is the global state before applying the function, and GSₜ₊₁ is the state after applying it. All functions are applicable to any global state (although some functions will result in an inadequate global state). Still, the formula above is inexact, because it suggests a functional approach. Instead, it would be best to think of IFP functions as the signature
void f()
where all data is passed and returned through global access. As such, lambda calculus is useless in IFP. With respect to natural thought, the global state is the content of our minds in a given time. By thinking, we update and change our minds. Regarding “inadequate global state”, well, that’s just being stupid.
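To make the contrast concrete, here is a minimal sketch in Python (the names and the dictionary-shaped global state are my own illustrative assumptions, not part of IFP): a pure function maps a value to a value, while an IFP-style function takes nothing, returns nothing, and works only through the global state.

```python
# The entire "mind" of the agent: one global state, modeled here as a dict.
GS = {"hunger": 5, "focus": "food"}

def pure_increment(x):
    # Functional style: maps a value to a value, no side effects.
    return x + 1

def eat():
    # IFP style: no parameters, no return value.
    # It reads and mutates the global state in its entirety.
    if GS.get("focus") == "food":
        GS["hunger"] = max(0, GS["hunger"] - 3)
        GS["focus"] = "rest"

eat()  # all "input" and "output" happen through GS
```

Note that `eat` is invoked for its effect alone; the only way to observe its "result" is to inspect `GS` afterwards.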
Functions have no side effects
Side effects are state transitions that occur outside of the function’s stack or scope. In IFP, functions cause nothing but side effects, and again, IFP functions don’t ever return anything. With respect to human intelligence, we don’t have a “stack”. Each thought continues from where the last thought ended. Any previous mental state is gone, and so there is nothing to return values to. Note this is somewhat similar to imperative programming.
Functional programming is declarative
In IFP, we both can’t and don’t predict the effects of applying a function. When we think, we can’t and don’t predict the result of putting our mind to a task. We might solve the task, or we might find something else to focus on. Note this slightly differs from imperative programming. When done right, imperative programming should be predictable.
Invoking functions with identical parameters returns the same result
In IFP, a function will cause different changes each time it is invoked. As in object-oriented programming, IFP functions have an internal state, and so even if the environment is identical, the result can be different. In natural thought, the meaning of the concepts with which we think changes every time we approach them. The amount of attention an action requires reduces with each repetition. Naturally, this is mostly transparent to us, because we seldom monitor the ways in which our minds change.
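A toy sketch of this principle (hypothetical names, and a deliberately simplistic model of "meaning"): the same invocation, against the same environment, mutates the global state differently each time, because the function carries internal, cross-invocation state.

```python
GS = {"meaning_of_home": "a building"}

def think_about_home():
    # Internal, cross-invocation state: how often we revisited the concept.
    think_about_home.repetitions = getattr(think_about_home, "repetitions", 0) + 1
    # Each repetition shifts the meaning, so the "same" call produces a
    # different mutation of the global state every time.
    meanings = ["a building", "a family", "a feeling"]
    idx = min(think_about_home.repetitions, len(meanings) - 1)
    GS["meaning_of_home"] = meanings[idx]

think_about_home()  # GS["meaning_of_home"] is now "a family"
think_about_home()  # the very same call now yields "a feeling"
```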
Functional programming uses higher order functions
Following the principle above, functions don’t return anything, and so cannot return functions. Moreover, we never bind a function to a variable, because we never pass variables to a function. With respect to natural thought, all thoughts exist in the current mental space. We don’t “store” higher order thoughts and use them when necessary. We think our way into different perspectives, by applying thoughts that shift our attention to different abstraction layers.
Functional programming uses recursion
As stated above, each invocation of a function changes its internal state and produces an unpredictable result, and so while a function can call itself, we can't predict its outcome, or whether the recursion will persist or stop. With respect to natural thought, we take algorithms as mere suggestions for future lines of thought. Again, it's possible we lose interest, and defer or "terminate" without reaching resolution.
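As a toy sketch (hypothetical names; in a real agent the "interest" would drift unpredictably, while here it is hard-coded so the example terminates): a function that calls itself only while its internal interest lasts, so from the outside the depth of the recursion is opaque.

```python
GS = {"steps_taken": 0}

def ruminate():
    # Internal state: interest in the topic, draining with each step.
    ruminate.interest = getattr(ruminate, "interest", 5) - 1
    GS["steps_taken"] += 1
    # The function may call itself, but only while interest remains;
    # nothing outside the function can predict when it stops.
    if ruminate.interest > 0:
        ruminate()

ruminate()
```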
Functional programming can use strict (eager) or non-strict (lazy) evaluation
In IFP, functions have no parameters. Instead they routinely access the global state, and have a cross-invocation internal state. With respect to natural thought, it is as if thoughts emerge out of our minds. We don't inject signals into our thoughts, but rather apply thought patterns within our mental space.
Functional programming uses type systems to prevent errors
In IFP, functions have neither arguments nor return values, and therefore, type systems cannot prevent computational errors. The result of an IFP calculation is unpredictable, and so routinely fails to produce any expected change to the global state. Still, to completely invert the usefulness of types in functional programming, in IFP, type systems are not syntactic. They are semantic. They are not discarded in compilation. On the contrary, they are one of the few things that exist at run time. With respect to natural thought, types are the abstractions with which we think. Note this point is a bit weak, because I took a bit of leeway. There's nothing in functional programming to force its inversion to include semantic type systems. Feel free to ignore it.
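A loose sketch of what "semantic types" might look like (entirely my own illustrative framing): type tags are ordinary run-time data attached to items in the global state, and the computation consults them while it runs, instead of a compiler checking them and then throwing them away.

```python
GS = {
    "items": [
        {"tag": "food", "value": "apple"},
        {"tag": "tool", "value": "hammer"},
    ]
}

def focus_on_food():
    # The type tag is consulted during the computation itself:
    # the abstraction "food" exists at run time and steers the mutation.
    GS["attention"] = [i["value"] for i in GS["items"] if i["tag"] == "food"]

focus_on_food()
```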
Functions are used as parameters
As we already mentioned, IFP functions don’t accept parameters, and so we can’t pass a function as a parameter. Still, even within the execution of an IFP function, functions are not referenced in the computation. With respect to natural thought, we don’t think in terms of thought patterns (e.g. when we’re hungry, we don’t think about deduction - we simply eat).
Variables are immutable
As mentioned before, IFP has nothing but side effects, and therefore, if IFP functions are to do anything, they must mutate the variables “in place”. With respect to natural thought, again, thinking changes our mind. Even if we choose to ignore or discard the things we think about, we do this by changing something in our psyche.
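A small sketch of the contrast (hypothetical names): the functional style leaves its input untouched and returns a fresh value, while the IFP style changes the one and only global state where it stands.

```python
import copy

GS = {"beliefs": ["the earth is flat"]}

def functional_revise(state):
    # Functional style: leave the input untouched, return a new value.
    new_state = copy.deepcopy(state)
    new_state["beliefs"] = ["the earth is round"]
    return new_state

def ifp_revise():
    # IFP style: mutate the one and only global state "in place".
    GS["beliefs"][0] = "the earth is round"

snapshot = functional_revise(GS)   # GS itself is unchanged at this point
ifp_revise()                       # now GS itself has changed
```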
Functional programming can be integrated with non functional languages
As mentioned before, IFP functions accept no parameters and don’t return values, and therefore, an IFP computational agent cannot “accept” a value computed in another computational domain. Any injection of data into the IFP computational agent is done through non IFP components, which do not take part in IFP computations. With respect to natural thought, the only way “in” to our mental space is as “raw” data coming through our senses. If a computer computes the value 42, the only way it can share it with us is as a sensed representation of or reference to the number 42.
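A sketch of that boundary (hypothetical names): the only way into the agent is a non-IFP "sensory" component that injects raw data into the global state; the IFP function then works off whatever was sensed, never off a passed-in value.

```python
GS = {"percepts": [], "recognized": None}

def sense(raw):
    # Non-IFP boundary component: the only way "in" is injecting raw data.
    GS["percepts"].append(raw)

def interpret():
    # IFP function: no parameters; it works off whatever was sensed.
    if "42" in GS["percepts"]:
        GS["recognized"] = 42

sense("42")   # a computer "shows" us the number as raw sensed data
interpret()   # the agent turns the percept into a mental item
```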
Functions are small
In IFP, each function can change the global state in its entirety. With respect to natural thought, when we think about something, we can easily forget everything we thought about prior.
Compilers can optimize functional code relatively well
Because IFP computations access the global state often, and result in unpredictable changes, predefined algorithms such as a compiler cannot optimize them effectively. With respect to natural thought, we don't optimize our thought process, but rather use thought to discover tools that make tasks simpler. It is extremely hard for us to think any faster. The few methods that exist (e.g., coffee?) are more like "over-clocking" than optimizations.
Functional programs are easy to read
On top of IFP functions being unpredictable, and opaque as to which aspects of the global state they mutate, as mentioned before, IFP computation cannot mix with non-IFP computation, and therefore, we can't attach it to a debugger. Only the IFP computational agent knows what's going on. With respect to natural thought, our thoughts are opaque. The only way to penetrate our thoughts is by observing the changes they make on our body. While the field of psychology attempts to penetrate this barrier of uncertainty, it has failed to produce anything like a "thought debugger" or a "thought IDE".
Before we continue, let’s review the differences between IFP and imperative or procedural programming
Functions can be predictable
When done "right", imperative programming can be predictable.
Types are not semantic
While imperative programming can avoid syntactic types and refer to all variables as mere addresses, still, run-time mutations of the type system are rare and anecdotal.
Computations include some functional components
Differently from IFP, imperative programming was not conceived as an inversion of functional programming. As such, in imperative programming, we always assume there are at least some pure functions, which serve as building blocks for higher-order operations (e.g. the instruction set of the CPU).
Functions can accept arguments
Again, imperative and procedural programming were not designed to invert functional programming, and so they allow passing parameters to functions. It is a good way to reuse code. Even if we consider assembly and machine code, as mentioned before, they pass parameters to CPU instructions.
Functions are reasonably sized
In imperative programming we prefer it if functions do not change every single global variable. This is not really a requirement, but rather a measure to ensure performance and maintainability.
Discussion
The main difference between imperative programming and IFP is that imperative programming is designed to work with physical Turing machines, while IFP isn't. I mean, it's just an idea. Still, considering that IFP routinely accesses and mutates the global state of the computational agent, we should expect it to work well on highly parallelizable and dense networks. As such, it's not surprising it correlates with natural thought, as the architecture of our brain is a dense and highly parallelizable network of neurons. While this observation is not revealing enough to teach us how to build a machine that works well with IFP, it does suggest we might be on the right track.
If we review the list of IFP principles, we can find repeated themes, such as IFP functions not accepting or returning values, IFP functions accessing and mutating the global state, etc. This is encouraging, because we were aiming to find a simple principle behind natural thought. Still, IFP has a long way to go before it can rival the applicability of the Turing machine computational model. That being said, we should remember IFP needs only to explain general intelligence. It need not explain the emergence of the wetware on which general intelligence first emerged.
All the same, we need something more. To clarify, it’s possible we found correlations between IFP and natural thought simply because we defined natural thought in a way that suited the claim. We marked the target around the arrow. We need a way to check if IFP holds merit, a way to falsify it, a prediction.
Well, here goes nothing.
As stated before, in IFP, functions accept no parameters and return no value. This means that if natural thought and general intelligence were to follow IFP principles, they should not appear as causal effects between parameters and results. Alternatively, if we do see such a causal effect, it isn't IFP, and so cannot be natural thought or general intelligence. Therefore, given that in neurons we can find causal effects, where neurons fire in response to stimulation, we can deduce that whatever neurons are doing when they fire, it is not general intelligence. Neurons are not firing thoughts. If IFP is the model behind general intelligence, it must be conducted via different mechanisms. Neurons might still be learning how to classify data, but that functionality is different from general intelligence. Quite possibly, neural networks are the non-thinking wetware, which developed IFP capabilities via a DNA mutation.
There are reasons to believe this prediction isn’t wrong. For example, we know there isn’t a single neuron that can produce or nullify consciousness. If this prediction is correct, this is what we should expect. If neurons are not directly involved in the mechanics of general intelligence, a single neuron should not be that important. If general intelligence requires IFP, it means intelligence sits in a global state, i.e. the unified condition of the entire network.
While proving this prediction does not necessarily prove our brain is an IFP computational agent, it is still quite revealing. To clarify, for years scientists have been puzzled by how the neural network formations in our brain can produce thought. According to this prediction, the answer is simple. Neural networks aren't "thinking". There's something quite different going on in our brain, and if we are to "crack" it, we need to look elsewhere, so get busy!
Still, how can it be that the inversion of a concept as fringe as functional programming reveals the secrets of our brain? That's very unexpected, which for many would suggest it simply isn't true. Well, while I agree it's strange, it's not that strange. Often, there is a complementary relation between the knowledge a tool abstracts away and the knowledge necessary to use it. For example, our memory is limited, so we put knowledge in libraries, and once we put ideas on paper, we don't need to remember them anymore.
With functional programming we replace each little step of computation with an idea. With lambda calculus at its root, functional programming converts computation into a formal system, a system of abstractions. Still, formal systems are bound to the system they define. For example, we can never convert one lemon into two lemons if all we have is mathematics. We need another physical lemon. Therefore, if functional programming reduces computation to a formal system, it means all the actual computation is done elsewhere. In modern computers, this is done automatically by the compiler, but in our minds, this is done automatically by the mechanisms of thought. Therefore, if we find functional programming easy to understand, it suggests the mechanisms of thought already know how to do the heavy lifting in this conversion. In other words, the mechanisms of natural thought are complementary to functional programming.
Still, there’s more to it. IFP is very similar to imperative programming, and imperative programming is used in assembly and machine code. In other words, when we dive into how things are done, how a computational agent actually performs computations, we remove the abstractions and the guard rails, and deal with the nitty gritty details. This principle applies to computers, and as our brain is a computational agent, it’s fitting that it applies to it as well.
Even if all of this is true, there's still a long way to go. Some of the principles we reviewed, such as semantic types, need further unification. Additionally, we need a better grasp of what the global state is, how computations are performed without a functional instruction set, etc. We can't build something just by saying it isn't something else. We need an actual model. Still, IFP might give us a clue where we should go next, and if it does, then, well, it served its purpose.