Friday, June 03, 2011

STREAM : inconsistent : chapter 6 : changing the odds



Before we continue, I wish to make it clear, that what we are to discuss next, is not an attempt to convince anyone to do anything. While it may be tempting to apply the concepts this text suggests, to morality or ethics, it would be a mistake. To clarify, thinking there is a change coming, prior to its unfolding, is prophetic, suggestive of angelic tendencies. Indeed, the limitless inconsistent metaphysical potential of the real gods implies, all angelic messages are equally possible and impossible, regardless of whether their semantics make sense, or of the origins of their angels, and had the real gods “wished it”, they could have realized all religious angelic prophecies, so as to fit historical, and future, events to the letter. Still, they did not, suggesting the only reason religions embrace inconsistent concepts, is because the real gods “wish it”.
The same applies to demon casting and dragons. For example, the real gods can prevent the emergence of the spontaneous demons necessary for the dragonish “training”. In other words, the visibility of the “azimuth” of change, which we suggested in the previous chapter, cannot imply it must transpire. In fact, the realization of this change is paradoxical. To explain, how can we know there would be no more angelic interventions? How is this different from the unfolding of an angelic prophecy?
Such a prophecy implies many changes must transpire. Religions must lose their grip on society. Empirical sciences must lose their objective throne and dominance. In other words, if such events were to unfold, it would suggest, our reality is still angelically “managed”. In short, paradoxically, while the “azimuth” of this change is to release us from our angelic “chains of infancy”, the existence of such prophetic clues, hints this change might be a mirage, and that our conviction it should transpire, is somewhat demonic. Nevertheless, this change embodies a shift away from the angelic method of handling our demonic influences, and hence, if we are to follow a new path, we cannot allow ourselves to follow the same angelic path we wish to avoid, as if we do, inevitably, we would achieve nothing. Therefore, even if this “prophecy” is real, its possible validity cannot provide us with providence. To conclude, if we are to “change the odds” in our favor, we cannot rely on the help of the real gods, at least not in the angelic sense.
We must refrain from angelic behavioral patterns. We must refrain from discussing ethics, as in good and evil, right or wrong. Therefore, we cannot justify proactive actions, against others who do not share our convictions. If we do, we will become exactly the thing we are trying to “change the odds” against. We cannot even afford to hate or patronize such individuals, as essentially, there is no justification for such a “negative” emotional reaction. To explain, the rebellion we discussed in the previous chapter, does not spawn from resentment toward any specific religion or morality, but rather from the partiality and falsity inherent to all of them. While the change, to which we were referring, may cause a spiritual vacuum, which we will have to somehow “fill”, still, as we suggested in the previous chapter, we are not the ones, who will cause this change. We will merely be its spectators.
Nevertheless, we should not forget our rebellion is a reaction to the visible falsity of the angelic path. We did not “just happen” to notice this falsity. The real gods made sure this falsity is visible. This visibility is not a personal preference. As we suggested in the previous chapter, there is a historical link motivating our rebellion. The real gods are consistently suggesting, we should shift away from our angelic tendencies, and had they not “wanted” this shift, it would never have been visible. We would never have discovered physical randomness exists. We would never have encountered the ethical and religious paradoxes the two world wars imply. We would never have become this self-centered as a society. Considering this, we can even argue, the thought such change should transpire, is not prophetic at all, but rather the product of our analysis of the historical data we have in the present. To explain, we can claim, that while indeed, our analysis utilized controversial terms such as "gods", "angels", "demons", and "dragons", in actuality, these were merely different names, referring to social and scientific facts, which due to our somewhat "fantastic" terminological translation, opened up for us a new rational perspective on our reality. To clarify, we can claim, that our cultural and technological progress, deems the religious persuasions of our ancestors, along with the social orders they established, obsolete.
Still, regardless, again, it is not "our job" to “change our world”. If this shift transpires, it will not be by our doing, nor by our realization it is visible, just as it has been persistently transpiring over the last one hundred years, without our assistance. We do not need to wage a new “holy war”, as the real gods are already “pulling the strings”, which may lead our reality, into the cataclysmic social event, required for the unfolding of this shift. While it is possible, the real gods may use angels to bring forth this cataclysmic social event, still, this is none of our concern. We need not take an angelic path, to make this cataclysmic social event come sooner, as if we do, we will not be eligible to profit from the very message we preach. Our “message” is simply not angelic. If anything, it is dragonish.
To summarize, “changing the odds” does not suggest, changing common religious persuasions. We are merely preparing for a yet unknown cataclysmic social event, which may result in the collapse of all angelic establishments and persuasions, but only "if" it happens. There are many “ifs” involved here. Again, as Aristotle correctly stated, we simply cannot know what the future will bring, and frankly, it is not such a bad thing. To explain, usually, we plan our future based on existing orders, and therefore, inevitably, most of us will severely suffer from such a cataclysmic social event. Nevertheless, if change is unavoidable, it would be better if we could exploit it to our advantage, rather than be its victims.
This is the notion, to which the statement “changing the odds”, refers. We should better our chances to benefit from such a cataclysmic social event. To clarify, this is not a “plan” for how to change our current reality. We wish to change only one possible future. Indeed, we can have no assurances this effort will be worthwhile. Still, arguably, it is better than doing nothing, as it grants us some of the hope we lost, once we comprehended the conclusions this text suggests. I mean, unless I am missing something HUGE, once fully understanding and accepting the conclusions this text suggests, the sense of resentment toward our reality is unavoidable. Furthermore, currently, there is nothing we can do about it, and therefore, some hope is definitely a “blessing”.
Indeed, you could claim, I am taking things too dramatically, and that we can simply enjoy life as an “experience”, without analyzing our existence so rigorously. Well, I disagree. To explain, metaphysical inconsistencies exist practically everywhere. Demonic effects are pounding our psyche relentlessly. We cannot obvert these conclusions. This is the extent of the statement “inconsistency must exist”. The question is, “How are we to deal with these demonic effects?” We cannot accept statements such as “simply enjoying life” “as is”, without contemplating our demonic influences, as in actuality, what we are claiming, is one of the following:

1. We do not have a plan for how to handle our demonic influences, nor are we willing to accept the existence of the demons this text describes, allowing our demons to govern our lives, uninterrupted. To clarify, regardless of our beliefs, the existence of the demons we described in this text, is as imperative as the existence of metaphysical inconsistencies in our reality. Therefore, by refraining from casting our demons, we imply, we will follow our demons, regardless of our intentions, and suffer the consequences of their inherent inconsistencies, blindly introducing chaos into our lives. Every plan, which we base on the validity of our demonically influenced concepts, may fail. When we reflect on our lives, we may feel shame and regret, for the mistakes we made, by following our demons, and a general sense of lack of achievement.
Sounds familiar? Of course it does. Most of us follow this path. Almost all of us follow at least one demon, ranging from religion, ethics, morality, and romantic obsession, to blind adherence to the inconsistencies, inherent to all sciences. These demons embody our weaknesses, and are the source of our distress. Obviously, there is no guarantee “a demon will strike”, and destroy our lives, and arguably, it may be possible, to live a full and happy life, while following our demons. However, honestly, this option does not "fit" my experience, and if you, dear listener, feel the same, I suggest you reject this option. If it does not work, then I say, fuck it. We need something else.
2. We attempt to understand ourselves better, so to gain control over our lives. Arguably, this option is somewhat similar to the option, which we already suggested in "Delta Theory", and actually, it is not that different from the dragonish option. Still, as I suggested in the first chapter, "Delta Theory" was incomplete, and so is this option. To clarify, in "Delta Theory" we neglected the effects metaphysical inconsistencies have over our psyche, suggesting the "solution" it proposed is inapplicable to our real lives. Arguably, we can claim, that by introducing the concept of demon casting, we may "complete" our "solution" from "Delta Theory", namely, introspective analysis, devotion to artistic creation, humor, playing games, and perhaps, narcotics. Still, if we reject this completion, or if we reject "Delta Theory" altogether, and resort to other methods of rational reflective analysis, potentially, our introspective methods might prove ineffective, when confronted with our demonic influences. To clarify, as we suggested throughout this text, metaphysical inconsistencies must both exist, and affect our psyche, regardless of our persuasions, and if we neglect this metaphysical fact, essentially, our reflective analysis will be as useless, when confronting our demonic influences, as the lacking course of action, which we suggested in the previous item. Moreover, arguably, such reflective analysis is quite different from simply "enjoying life", suggesting we are somewhat detached from our real psychological condition, meaning, that our reflective analysis is indeed quite lacking, even without suggesting the existence of our demonic influences.
3. We choose to practice delta bestiality, meaning, we attempt to solve all our demonically induced sufferings, by appeasing one type of deltas, be it mindless hedonism (namely, bodily bestiality), mechanizing our lives (namely, logical bestiality), or devotion to fantasies, which we continuously invent, to appease our current hardships and desires (namely, completeness bestiality). Originally, we introduced the concept of delta bestiality in chapter six of "Delta Theory", which you can revisit, if you need clarifications. Moreover, in chapter four of this text, we revisited the notion of delta bestiality, which to remind you, we suggested, is somewhat similar to demonic possession, with respect to the behavioral patterns it makes us exhibit. Still, as we mentioned, delta bestiality is different from demonic possession, because as we suggested, we choose to practice delta bestiality, while possessive demons force their effects on us. Actually, our current course of discussion exemplifies this difference. To explain, as we already hinted, we can choose to practice delta bestiality, to resist our demonic influences, rather than to adhere to them. For example, if our demons increase our susceptibility to bodily desires, we may choose to practice logical bestiality, as a method to enforce discipline over our lives, so as to regain control over our decisions, and actually, this method is quite popular, leading individuals, who originally, were somewhat "decadent", into ascetic lifestyles. Alternatively, if our demonic influences are fixating, suggesting we should be prudent and obedient, we may forcefully attempt to free ourselves from their influence, through hedonist bodily bestiality, or through fantastic completeness bestiality. Moreover, if our demons are spontaneous and proactive, we may attempt to prevent our detachment from reality, by mechanizing our lives through logical bestiality, or "silencing" our "demonic callings", by overwhelming our psyche with sensual experiences, such as sexuality, narcotics (including psychiatric pharmaceutical drugs), obesity, and the like.
Indeed, in many cases, through delta bestiality, we may be able to overcome some of our demonic influences, while improving the quality of our lives significantly. For example, if our demonic influences led us to a life of crime, obverting our demonic tendencies, may prevent us from clashing with judicial systems, if not save us from a premature death. Still, again, as we suggested in "Delta Theory", delta bestiality brings its own set of hardships with it, be it dismissal from our social environment (due to our bodily or completeness bestiality practices), or a general reflective sense of discontent, if not shame, for our lack of achievement (due to our logical bestiality practices). To clarify, as we suggested in "Delta Theory", all types of delta bestiality, reduce our cognitive potential, to that of an arguably lesser biological species, while as we suggested in the previous chapter, demon casting allows us to exceed our cognitive capabilities and achievements, beyond all current knowledge, and chances are good, we will notice it, causing an ever growing retrospective conviction, we wasted our lives in vain. In addition, considering our suggestions from chapter four, it is very possible, our delta bestiality practices, were but another demon, waiting for us to adhere to it, while our original motivations, namely, to resist our distinguishable demons, were but a "trick", luring us into the hands of our real demons. To clarify, because as we suggested in chapter four, demons affect the "filter" of our consciousness, usually, we are somewhat oblivious to their effects on us, and hence, it may be possible, the demons we attempt to resist, through delta bestiality, are but "an excuse", which our real demons persuade us to invent. For example, we can attempt to justify our completeness or bodily bestiality practices, claiming that otherwise, our lives would be dull and mechanical, while in actuality, we were simply too "lazy", to even attempt to discipline ourselves. To conclude, delta bestiality reduces our potential, and may result in demonic possession, irreparably damaging the quality of our lives, and while obviously, we cannot forbid it, still, personally, I find it somewhat distasteful and lacking, with respect to the dragonish option.
4. We follow the path of polytheist angels. We promote the persuasion that concepts exist in the external world, with which we formally justify our way of life. I intentionally did not mention polytheist religions, because as Plato’s philosophy shows us, polytheist religions are merely a subset of this option.
Indeed, such persuasions can convince us, our happy carelessness, is justified. For example, we can choose to follow Dionysus or Satan, so as to justify a hedonistic lifestyle. Nevertheless, as we suggested both in chapter three, and in “Delta Theory”, metaphysically, such beliefs are invalid. While we may think in terms of polytheist gods, in actuality, we suffer from the effects of possessive inhibitive fixating demons. Still, with respect to the previous item, arguably, such persuasions are different from delta bestiality, as essentially, polytheist persuasions are well-defined (meaning, they do not reflect completeness bestiality practices), and today, they are not that popular, suggesting they may not reflect logical bestiality practices, as logical bestiality tends to undermine our ability to choose, while selecting an unpopular religious persuasion, somewhat reflects our individuality. Still, even without the effects of delta bestiality, and even if possibly, such persuasions allow us to "enjoy life" in the name of our animated concepts, this comes with a price. To explain, we will fail to see the impossibility inherent to our beliefs, and therefore, inevitably, many aspects of our reality, will fail to comply with our persuasion, nourishing an ever-growing resentment toward our reality.
Things could have been different. If polytheist religions were dominant in society, our immediate reality would better support polytheism. However, currently, monotheism is far more dominant. This growing sense of resentment could peak. If it does, naturally, we can change our persuasion. However, how could we justify our careless lifestyle if we do? Still, if we refuse to change our persuasion, what options do we have? Our religious dedication will bring us nothing but loneliness. So how are we to sustain our happy carelessness?
Well, while indeed, today, polytheist religious followers are neither common, nor visible, they still exist, suggesting that theoretically, we can isolate ourselves from society, within a small closed polytheist religious group. This group will serve as our conceptual protection, keeping us both careless, and oblivious to the obvious inconsistencies, between the external world, and our beliefs. Still, the bitterness toward society, inherent to all members of our small religious group (or better said, “cult”) will induce its own demonic effects. To explain, we cannot justify our shared bitterness rationally, as essentially, it is demonic. However, unlike other demons, our cult will not be able to "protect us" from these demons effectively. To clarify, when we discussed western monotheism, we showed how the existence of other religions, negates the omnipotence of the western monotheist god, and the same applies to our polytheist persuasion. The polytheist gods our cult would cherish, would remain impotent outside the boundaries of our cult, and we will all notice it, enhancing our shared sense of bitterness. Still, because our cult must close its doors to the rest of society, our cult comrades will be able to express this resentment, only toward each other. Furthermore, because the cult is hermetic, no external authority could enforce a different morality on our cult, and therefore, inevitably, our original utopia will transform into a cycle of abuse between the members of our cult, which no external authority could regulate. In fact, this can explain past records of cult related cases of abuse and mass suicide. In any case, it does not add up. Polytheist carelessness and happiness are bound to collapse.

5. We are stupid, as simple as that. Our thought patterns resemble those of an animal lacking a consciousness, more than those of a human. We do not suffer demonic effects, because instinctively, we do not think in concepts. We simply fail to understand anything. Still, we should remember, we do not choose to think in concepts. It is in our genes. It is in the automaton governing our animal body, and hence, if our ability to think in concepts is limited, it simply means, our automaton is “defective”. We fail to carry the human genome from our parents. We might be happy, and that would be absolutely wonderful, but we are not really the same as other humans. We do not need to solve demonically induced “problems” as our evolutionary backwardness “saved us the trouble”. This is indeed backwardness (or alternatively, devolution). To explain, as we suggested in “Delta Theory”, our ability to think in concepts came long after the initial evolutionary formation of the brain. Furthermore, our ability to think in concepts came hand in hand with the degradation of our instinctive manners of survival, which motivated humans to think. Therefore, such “happiness” is by no means “progress”. Moreover, arguably, such devolution cannot survive. To clarify, again, our ability to think in concepts, made us masters of the earthly food-chain. However, without our cognitive capabilities, our survival capabilities are undoubtedly limited, and arguably, being this devolved, the only way we may be able to survive, is by relying on more evolved humans, to provide us with our survival necessities. To summarize, while it is indeed wonderful, such stupidity cannot be a model for more evolved humans, much less for dragons, which arguably, are even more evolved than normal humans are. Nevertheless, such cognitive evolutionary backwardness would make it practically impossible to understand this text, and therefore, I guess it does not apply to you, and hence, there is no point discussing this option further.

6. We are lying. We are neither happy nor enjoying life. We just say we do. Well, sorry sis. This text is about consistency, truth, and the like. Any possible misconceptions we might have about ourselves are simply off topic. Still, possibly, our belief we are happy is demonic. If so, then welcome! We are “in the same boat”.

Please note:
I did not discuss the possibility of living carelessly happy, while following a monotheist religion, as the constant sense of moral guilt, on which monotheist religions thrive, negates this option. Furthermore, I did not address careless happy atheists, as I believe we can map them to the options we already suggested, namely, items one, two, five, and six.
Arguably, none of the options we reviewed is desirable. Still, again, with respect to our conclusions from the previous chapter, the option we suggested, namely, the "dragonish option", is not much better. To clarify, while indeed, our dragonish tendencies may allow us to handle our demonic influences constructively, by converting our demonically induced “problems”, into elements beneficial both personally, and culturally, still, it comes with such a heavy price, that currently, as long as following the angelic path is practical, it is undesirable as well. Potentially, we can mix the two options. To explain, supposing we did not yet reach dragonish demon casting levels, we can “mix” the practice of demon casting, with angelic practices, and actually, because again, today, our society is mostly angelic, usually, if we are to undergo some type of dragonish "training", arguably, such a mixture is unavoidable. Still, the cataclysmic social event, to which we were referring, suggests the angelic path will no longer be available, and in such an event, either we will suffer from distress, or demon cast our way, out of our demonically induced sufferings. Nevertheless, currently, excessive demon casting results in distress, and hence, it is undesirable, causing a sense of paradox, where we are driven to distance ourselves from the grip of our demons, while at the same time, feeling a sense of longing for some type of angelic embrace.
If we are to find hope, we must better this disposition. We cannot accept, that when facing the “problem” of a global angelic collapse, the dragonish “solution” will still result in loneliness and depression. We cannot, and we should not, as it is our current angelic perspective, which deems the dragonish “solution” as such. To explain, angelic perspectives do not consider demon casting as either beneficial, or significant. On the contrary, angelic perspectives are antagonistic to the practice of demon casting, fearing we will cast the inhibitive fixating demon they endorse, and actually, we can find historical records, which hint at this antagonism. For example, in Judaism, the second commandment, “Thou shalt not make graven images”, suggests a religious antagonism toward artistic creation. While arguably, the motivation behind this commandment was to prohibit idol worshipping, this is not what the commandment says. It clearly prohibits the act of creation, not the act of worshipping. Indeed, the first commandment prohibits worshipping different gods. Still, it is a different commandment. It is a general prohibition. It refers to various religious activities, and not merely to artistic creation. The link between artistic creation and worship, suggests a link between the elements involved in both practices, meaning art and religion, and actually, by now, such a link should not surprise us, as demon casting can yield them both.
Surprisingly, idol worshippers were antagonistic to demon casting as well. For example, in “The Republic”, Plato banned talented artists from entering his ideal state, in fear that the artistic muses would disrupt its perfect order. This is atypical. Ancient Greece was polytheist. The ancient Greeks were already idol worshippers. Moreover, the ancient Greek civilization cherished the arts.
At first glance, the motivation behind Plato’s suspicion of artists, appears to be different from that of Judaism, but it is not. To explain, Plato feared artists would challenge the Ideal Rationalism, which was to govern his "ideal state", suggesting he failed to comprehend, his own Ideal Rationalism, was demonic. To clarify, regardless of whether our analysis of metaphysical inconsistencies was rational and consistent, as we suggested in the first chapter, metaphysical inconsistencies must exist, and therefore, rational consistency cannot fully explain the nature of our reality. Therefore, Plato’s Ideal Rationalism is as inconsistent with our reality, as the Jewish persuasion, or any other inhibitive fixating demon for that matter. Plato feared artists would demon cast his Ideal Rationalism, just as Judaism fears artists would demon cast the western monotheist god, and frankly, this fear is justified, as demon casters can obstruct any belief system.
Still, Plato’s Ideal Rationalism is but one example. Generally, practical sciences do not reserve a special place for demon casting. Demon casting simply does not exist in the scientific jargon. While arguably, the practice of demon casting yielded some of humanity’s greatest scientific breakthroughs, we only remember its “fruits”. We remember the individuals, who discovered these scientific breakthroughs, as geniuses, while the cognitive methods, by which they discovered them, we consider little more than gossip. The reason for this is simple. As we suggested in the previous chapter, demon casting is not a well-defined practice. The infinite contingent dimensional variety, in which our demons may emerge, forces the same infinite variety on their methods of casting, suggesting there can never be a prescription, "how to demon cast". Simply put, demon casting is not a scientific discipline. Still, for dragons, it is not only a practice. It is a way of life.
Therefore, it is no wonder, the dragonish option leads to loneliness and distress, as essentially, none of its important features is either respected, or even recognized. Moreover, because demon casting utilizes analysis and synthesis of inconsistent demonically induced concepts, society dismisses our dragonish achievements. To explain, from an external perspective, dragons appear very much insane, causing our social environment to mock our motivations to demon cast. For example, if our motivations to demon cast, spawned due to unfortunate events, which left us puzzled, most probably, our social environment would consider our efforts, as little more than misguided grievance. Arguably, our dragonish achievements may tone down this condescending attitude, but only to some degree.
Naturally, dragons do not appreciate this attitude. To explain, let us consider the perspective of a dragon. We fought our demon, and beat it, achieving something truly impressive in the process. Still, in the end, our social environment thinks we are insane. What an outrage. Indeed, before we demon casted, our demons still had a “grip” over us, and hence, arguably, we were somewhat “insane”, but now we are healed. Nevertheless, our social environment still perceives us as mentally sick, or unstable, and therefore, it is no wonder, such an outcome increases our sense of alienation, loneliness, and inevitable depression.
To conclude, the reason that currently, the dragonish option is unattractive, has little to do with any inherent flaw of demon casting. In fact, its main problem is that no one considers it as an "option" at all. Again, contemporary society does not accept that the demons we described in this text exist, and therefore, it does not consider demon casting a virtue. Society mistakenly identifies demons as angels, fabrications invented by angels, or insanity. However, in the event of a global angelic collapse, angelic perspectives will collapse as well, while the dragonish option will remain "untouched". Still, as we just explained, “untouched” is not good enough. The dragonish option is unattractive. It has not yet matured. Therefore, to “change the odds” so that such an angelic collapse will signal a transition to the dragonish option, we need to nourish it. We need to prove the dragonish option is sound, and the only way to achieve such a goal, is by empirically proving its metaphysical foundation. Empirically proving the metaphysical foundation, which both "Delta Theory", as well as this text suggest, will validate our conclusions, including the beneficial aspects of the dragonish option, and yes, this is what the phrase, “changing the odds” implies.
Still, how can we possibly do that? To clarify, in the introduction chapter of “Delta Theory”, we already explained, why we cannot validate metaphysical theories, as there can always be untraceable elements, which affect our reality, and arguably, consistent metaphysical theories are even worse, as their consistency suggests, we cannot even refute them. So how can we validate any metaphysical theory?
Well, we do not have to validate our metaphysical foundation in its entirety. We can settle for validating its completeness criterion, meaning, the manner by which sensations, constructs of sensations, thought, sensory, repression, and nullification, emerge in our consciousness. To explain, in the introduction chapter of "Delta Theory", we suggested, that being self-aware, we can only validate the existence of our self-awareness, as from our perspective, the ontological existence of any other element, is but an assumption, or alternatively, a belief. Therefore, arguably, we should only validate the portions of our metaphysical foundation, which refer to our self-awareness, all of which reflect in the manner "Delta Theory" answered the completeness criterion, along with the corrections we added in this text (namely, the attributes of metaphysical inconsistencies, as they reflect in our consciousness, or alternatively, our demonic influences). In addition, we should note, “Delta Theory” may require future corrections, as surely, it overlooked many issues. Moreover, as I suggested in the last chapter of "Delta Theory", new scientific discoveries may demand we revisit our metaphysical assumptions, and therefore, generally, limiting our validation to our essentials, is good practice, as essentially, even such a humble argumentative validation, is enough to affirm demons affect our psyche, and hence, is enough to justify the usefulness of demon casting. Assuming our effort is successful, validating that the dragonish option is beneficial, should be trivial.
Still, we should understand the implications of our effort. To explain, supposing our metaphysical foundation is consistent, philosophically, there is no point in further discussion. We can leave it "as is", meaning, as yet another philosophical "option", residing in the vast archives of human meditation. To clarify, theoretically, we are already "there". Nevertheless, somehow, it does not change our immediate disposition. Therefore, regardless of our personal confidence in our metaphysical foundation, and more specifically, in the manner it answers the completeness criterion, we need to "extend" our confidence, beyond the "scope" of our beliefs, and onto the "scope" of the external world, meaning, we need to prove the completeness criterion empirically. Therefore, considering our previous conclusions, that metaphysical inconsistencies affect us, by utilizing the dimension of consciousness, and its ability to inflict changes, whose causative justification originates from irrelevant worlds, in practice, to “change the odds”, we need to build machines, which utilize the dimension of consciousness, or alternatively, conscious machines, and whose performance we could measure empirically. If these machines “work”, meaning, if they utilize the dimension of consciousness, when determining the actions they would perform in the external world, it would affirm the metaphysical foundation by which we designed them, and therefore, affirm the dragonish option.
Still, obviously, this will not be easy. Many things can go wrong. For example, it is possible, the manner, by which “Delta Theory” satisfies the completeness criterion, is fundamentally incorrect, and hence, obviously, we will fail to build any machine, whose feasibility relies on its validity. Still, even if so, we should not worry about this option. On the contrary, it is a good thing. To explain, disproving the manner, by which “Delta Theory” satisfies the completeness criterion, discredits its metaphysical foundation, including our recent conclusions, with respect to gods and religions, which arguably, is a blessing. To clarify, it would have been better for all of us, if our reality was different from the reality this text reflects, a reality in which we could have trusted our gods, to provide us with providence, rather than inconsistent metaphysical tyranny, a reality, in which the dragonish option would have been unnecessary. Therefore, essentially, refuting “Delta Theory” would be a valuable positive achievement on its own, regardless if it would imply, our efforts to construct conscious machines would fail. Nevertheless, refuting a metaphysical theory can be as difficult as validating it, and therefore, generally, we should not expect much trouble on this front.
Still, even if our metaphysical basis is sound, and even if we manage to build such machines, how will we determine if we achieved anything at all? To explain, as we suggested in "Delta Theory", metaphysically, elements, which the dimension of consciousness hosts, are irrelevant to elements existing in the external world, meaning, physical elements, suggesting we cannot validate that such machines utilize the dimension of consciousness, using materialistic instrumentation. Moreover, even though our internal mental world, consists of contingent dimensions, as well as the dimension of consciousness, arguably, our consciousness cannot intrude upon any such alternate internal mental worlds. To clarify, regardless of whether the dimension of consciousness is universal, meaning, that we all utilize it, as the metaphysical enabler of our self-awareness, our consciousness consists of contingent dimensions as well, of which all are both somewhat unique to our individual consciousness, as well as independent of all other contingent dimensions, imposing a metaphysical limit, beyond which we can validate nothing, while the manner such conscious machines would utilize the dimension of consciousness, could only be evident beyond this metaphysical barrier. Actually, in chapter two, we already discussed a similar issue, suggesting we cannot validate the self-awareness of anyone but ourselves, let alone that of a machine. Still, as we have just shown, we cannot even measure the contingent dimensional condition of such machines, regardless of whether they are self-aware or not. However, if we can validate neither the self-awareness of our future machines, nor their utilization of the dimension of consciousness, then how are we to utilize them to prove anything?
Actually, our challenge is even greater. To clarify, again, according to "Delta Theory", metaphysically, all contingent dimensions are independent of one another, and therefore, theoretically, there is no limit on the number of contingent dimensions a particle may span. Therefore, even if we could measure the contingent dimensional composition of the particles, of which our future machines would consist, potentially, each of these particles may span an infinite number of contingent dimensions, and hence, even if metaphysically, we could "access" their contingent dimensional composition, we would simply not have the time to go over the data. Actually, this difficulty exemplifies yet another reason, why we cannot validate our metaphysical foundation through other means, such as through measuring physical randomness, or experimentations with life forms, as essentially, according to our metaphysical foundation, contingent dimensions govern them, which again, we simply cannot measure.
So how are we to overcome this metaphysical barrier? Well, we need to broaden our perspective. To explain, if we are to prove anything, with respect to either the dimension of consciousness, or contingent dimensions in general, we cannot use the standard deterministic model for experiments, in which we have preconditions, and predicted postconditions, which we compare with the actual postconditions. No. We must remove the preconditions, use our metaphysical foundation to predict something inconceivable, and then, empirically, show it transpired.
The removal of preconditions is mandatory to all experiments conducted over contingent dimensions. For example, with respect to life forms, suppose we conduct an experiment, to test whether passing an electrical current through a dead animal, brings it back to life (such as in the famous story of Frankenstein). Even if the animal came to life, we could not determine the role contingent dimensions played in our experiment. Our traditional empirical intuitions would persuade us, this revival occurred due to the electric currents we passed through the corpse of the animal, regardless of the level of biological decay it exhibited prior. Still, this would not imply, contingent dimensions were not a factor, but rather that because our scientific instrumentation, failed to measure contingent dimensional data, we could not include it in our analysis of the results of our experiment.
Supposing that indeed, contingent dimensions both exist, and affect our reality, while remaining "inaccessible" to our scientific instrumentation, again, we must conduct experiments, where we could provide no materialistic explanation to the empirical results we would measure, other than our metaphysical foundation, showing our metaphysical foundation is more "exact" than our scientific instrumentation. However, how can we conduct an experiment using elements our instrumentation cannot access? How can we call it an experiment at all?
But this is exactly the point. To clarify, think what we attempt to prove. We attempt to show, that even though we cannot measure it directly, our reality consists of dimensions. We cannot conduct experiments over the imminent dimensions, as to experiment with them, we must be able to isolate them from our instrumentation, while according to our metaphysical foundation, the imminent dimensions are the metaphysical enablers for the existence of our scientific instrumentation.
Still, fortunately, contingent dimensions are accessible to empirical study. To explain, indeed, we cannot measure contingent dimensional composition with scientific instrumentation. Nevertheless, we can measure their effects on our reality. In fact, it is possible that accidentally, science has already measured their effects, while discovering physical randomness in nature. Indeed, this is only a hypothesis, as empirically, we did not yet prove contingent dimensions cause the emergence of physical randomness in nature, or anything of the sort. Nevertheless, supposing we construct artificially conscious machines successfully, arguably, we may be able to affirm it. Again, regardless of their somewhat "confusing" name, our attempts to build such machines, do not spawn from our desire to build artificial intelligence, and most probably, our crude designs will yield machines of questionable intelligence, if we could call them "intelligent" at all. Still, their construction would prove valuable, for the following reasons:

1. We will base the design and architecture of these machines on a metaphysical foundation, which we conceptualized by analyzing our consciousness (meaning, the manner, by which "Delta Theory" constructed its metaphysical foundation). This is important, as it will allow us to deduce, that the metaphysical principles we would incorporate in the design of our artificially conscious machines, should be true to human consciousnesses as well.

2. Our metaphysical foundation would explain why these machines should both work, and share some of our cognitive capabilities, prior to their successful construction. This is important, as it provides us with the necessities, which we require to validate our metaphysical foundation. To explain, by showing these machines work as we predicted, while utilizing attributes scientific instrumentation cannot measure, we will show, the flow of consistency in our reality, exceeds the inherent limitations of our scientific instrumentation, exhibiting its metaphysical limitation. In simpler terms, it will show contingent regularities exist in our reality.

3. Regardless of the manner our future machines would utilize the dimension of consciousness, naturally, our machines would be artificial, rather than "natural" life forms, and hence, their successful construction would hint, that exactly as we suggested in "Delta Theory", metaphysically, life, and consciousness, are irrelevant to one another. Indeed, in itself, the significance of such a claim, is quite marginal. Still, arguably, it may add argumentative robustness to our effort, as well as help us obvert possible moral prejudice, so common to any discussion, with respect to technology, and the human psyche.

4. We have no other prior empirical knowledge, with which we could justify the feasibility of the technology, which we would incorporate in these machines, suggesting the successful construction of such machines, should signal an immediate metaphysical paradigm shift in practical sciences. This is important, so that in the event we succeed, the scientific community will not attribute our success to an explanatory model, which would refrain from validating the dragonish option.


While arguably, this collection of items appears promising, still, there is a problem. To explain, indeed, in “Delta Theory” we attempted to answer the completeness criterion, by analyzing our experience as consciousnesses rationally, and consistently. Still, it is possible, our consciousnesses possess attributes, which are incompatible with consistent or rational analysis, suggesting the manner our future machines would utilize the dimension of consciousness, would be fundamentally different, from the manner our consciousnesses utilize it, allowing us to argue against our metaphysical foundation, even in the event we will construct such machines successfully. Actually, this is more than but a vague speculation, as arguably, throughout this text, we have shown, the existence and effects of irrational and inconsistent elements over our psyche, namely, demons, and there may be others. To clarify, in no way do I claim, this text embodies all there is to discuss, with respect to metaphysical inconsistencies over our psyche.
The truth is, we cannot base our actions on our lack of knowledge, and hence, arguably, we may not be able to overcome this difficulty. Still, we should remember our motivation. We do not require conscious machines, as had we merely wanted to populate our reality with more consciousnesses, we could have just as well reproduced, or utilized existing cloning technologies. Furthermore, again, our main motivation is not to validate our metaphysical foundation in its entirety, but rather to affirm it is more "exact" than the current materialistic metaphysical paradigm of empirical sciences, and arguably, constructing our future machines successfully, while basing our effort on a theoretical foundation, fundamentally different from that of contemporary empirical sciences, would successfully achieve this goal. Moreover, as we previously suggested, we would not restrict the metaphysical foundation, according to which we will design our future machines, to the one we suggested in "Delta Theory", but rather extend it to our conclusions from this text, meaning, we would construct our machines, so they would suffer from demonic influences.
Actually, considering our conclusions from this text, there is no other option. To explain, first, because as we suggested in chapter two, inconsistent elements can alter the contingent dimensional composition of any particle, any machine, being physical, meaning, consisting of particles, is potentially susceptible to the effects of inconsistent elements. Still, naturally, our goal is not merely to build machines, which would exist physically, nor is it to build machines, whose existence would be consistent with the existence of the dimension of consciousness. Our aim is to build machines, which would utilize the dimensional hosting capabilities of the dimension of consciousness, so as to relate to events occurring in a world, external to these machines, and so as to be able to react to their external conditions. Therefore, essentially, we attempt to build machines, which would cause irrelevant worlds, to become relevant to one another, suggesting our future machines would utilize metaphysical inconsistencies of some sort. However, because only inconsistent elements may yield metaphysical inconsistencies, it suggests, our future machines must somehow incorporate inconsistent elements into their design. Therefore, because we are to utilize the dimension of consciousness, and because as we suggested in the first chapter, inconsistent elements must always exhibit all the attributes of metaphysical inconsistencies, it implies, the assimilation of inconsistent elements into the design of our future machines, would result in demonic effects.
Still, this is neither a drawback, nor a problem. On the contrary, we will “exploit” this imperative, to our advantage. To explain, as we suggested in the previous chapter, our demonic influences, along with our cognitive ability to demon cast, allowed us to progress both culturally, and technologically, beyond the limit of any consistent cognitive progress, and possibly, we could utilize a similar inconsistent cognitive progression, as the inconceivable feature of our future machines, which will mark our effort successful.
Still, arguably, such a suggestion, may damage our effort. To explain, as we suggested in the previous chapter, computer sciences have already proved, that utilizing truly unbiased randomness, we can solve problems using probabilistic algorithms, using significantly fewer calculations than deterministic algorithms require. While indeed, such probabilistic algorithms may err, still, with but a few additional calculations, we can reduce our margin of error, to an arbitrarily small value. Therefore, considering our future machines would utilize contingent dimensional manipulations, through the manner they would manipulate the dimension of consciousness, we could argue, that the reason their performance would exceed our expectations, is merely because of the random component determining their operations. To remind you, empirical sciences have discovered this already, without initiating the scientific paradigm shift, which is the cause of our effort, and hence, it suggests, potentially, our effort will fail to bear fruit.
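As an aside, the computer-science fact referenced above can be illustrated with a small, self-contained sketch of my own (not part of the argument itself): the Miller-Rabin primality test answers "is n prime?" probabilistically, and each additional random round multiplies the worst-case error probability by at most one quarter, so a handful of extra rounds shrinks the margin of error to an arbitrarily small value. The function name and the example numbers below are merely illustrative choices.

```python
# Minimal sketch: a standard probabilistic algorithm (Miller-Rabin) whose
# error probability shrinks exponentially with the number of random rounds.
import random

def is_probably_prime(n: int, rounds: int = 20) -> bool:
    """Reports a composite n as prime with probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # 'a' witnesses that n is composite
    return True  # no witness found; n is prime with high probability

print(is_probably_prime(2**61 - 1))  # a known Mersenne prime: True
print(is_probably_prime(2**61 + 1))  # composite (divisible by 3): False
```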
To summarize, our future machines should exhibit more than merely unpredictable superior performance of some sort. We require a "stricter" success criterion, one that would exemplify the difference between the metaphysical foundation we attempt to validate, and the current metaphysical paradigm of empirical sciences.
Still, what could this criterion be? Well, if you remember, in chapter two, we argued, that because physical randomness is the only aspect of our reality, which according to our current scientific paradigm, we will never be able to predict, if it does not adhere to contingent dimensional regularities, it deems our self-awareness neither a consistent element, nor an inconsistent element, and hence, we concluded, contingent dimensional regularities must govern it. Therefore, to counter the existing scientific paradigm, which rejects the notion of contingent, or alternatively, local dimensions, we must somehow exemplify, our future machines did not merely surpass their theoretical capabilities in a random fashion, but rather in a fashion we will predict, in advance.
In chapter four of "Delta Theory", we already discussed the manner, in which contingent dimensions affect our reality (namely, governing the manner our world persists, in the small volume of space, which a particle occupies), as well as the type of physical events, whose outcome they determine (namely, collisions between particles). Our future machines will incorporate these yet unknown, and immeasurable, attributes of matter. Still, these attributes being contingent, or alternatively, local, we can exploit them only to some degree, as the metaphysical intangibility of contingent dimensions, imposes a limit over the type of architectures, which can exploit them. Moreover, most probably, they will remain intangible, if not utilized in a manner, similar to the one we will soon describe. Actually, this in itself may impose a challenge. To clarify, because the attributes of matter, which we will exploit in our future machines, are intangible to materialistic instrumentation, which does not incorporate an architecture, similar to that which we attempt to build, until we successfully complete our effort, we will not be able to justify it empirically. Before then, it is all a question of faith.
Still, this is not the end of our problems. To explain, currently, it is hard to determine, how much of our future artificially conscious machines will be of our design, and how much of their design, we would have to borrow from nature, “as is”. To clarify, as we suggested throughout this text, it is possible, our sensations of pain occur due to the insertion of inconsistent elements into our internal mental world, through the dimension of consciousness. Actually, it may not merely be possible. It might be imperative. To explain, according to "Delta Theory", our sensations of pain cause the emergence of all our deltas, inflicting us with our wills and desires. These wills originate from the external world, which as we suggested, metaphysically, is irrelevant to our internal mental world, and hence, it suggests, our sensations of pain reflect the existence of metaphysical inconsistencies in our psyche. Still, because from our experience, we know our animal body can inflict us with sensations of pain (such as, for example, when making us feel hunger), it suggests, our animal body manages to manipulate metaphysical inconsistencies, a capability, which currently, we can only dream of mastering. Therefore, this may extend the scope of our effort, from "merely" mastering contingent dimensional technologies, to mastering metaphysical inconsistencies as well, imposing a tougher challenge, which potentially, could demotivate us from answering it in its entirety. Instead, due to pragmatic limitations, we may be forced to resort to "partial" solutions, ones that will mix our own design, with the "designs" we find in nature, such as those existing in the neurons in our brain, for example.
Indeed, such "partial" solutions may significantly reduce the challenge, which constructing such machines may impose. However, because we will not understand the metaphysical essence of the components, which we will incorporate within our future machines, potentially, we may not be able to understand, why our machines "work", which in turn, would deem our effort pointless. To clarify, again, we need neither construct artificial consciousnesses, nor construct artificial self-awareness, but rather validate our metaphysical foundation. Moreover, arguably, by incorporating components, which we would borrow from nature, we could argue, our effort would not yield machines at all, but rather reconstructions of existing biological organs (as small as they might be), not so different from contemporary implant procedures, further exemplifying the difference, between the product of our effort, and the goal to which we aspire.
Still, this is but one possibility, and it may also be possible, that once we begin our actual effort, we will find ways to overcome this challenge, and actually, with respect to this specific difficulty, there may be a reason for some optimism. To clarify, because apparently, all humans react to pain sensations, it suggests, the insertion of pain sensations into our psyche, utilizes a universal feature of the external world, or alternatively, reflects the regularity, which one of the imminent dimensions sustains, and actually, in "Delta Theory", we already suggested this possibility. If so, this challenge may be relatively easy to master, as essentially, all our contemporary materialistic machines, utilize the imminent dimensions. Therefore, while indeed, without further experimentation, it is hard to determine if this is indeed the case, still, for the course of our current discussion, we will assume this challenge is solvable. Moreover, arguably, the insertion of metaphysical inconsistencies into our future machines, through a process, somewhat similar to the one transpiring in our brain whenever we feel pain sensations, may be the key ingredient, enabling the successful completion of our effort.
In what way? Well, as we previously mentioned, while verifying the existence of our self-awareness is trivial, we cannot verify the existence of self-awareness in others, let alone in a machine. Moreover, we cannot know in what manner, our future machines would utilize the dimension of consciousness. Nevertheless, we can predict, that if our future machines would utilize the dimension of consciousness, in a manner similar to the manner our neural biology utilizes it, the reactions of our future machines, to the introduction of metaphysical inconsistencies into their "conscious" components, meaning, their components, which would utilize the dimension of consciousness, should be similar to our reactions. For example, we should expect, our future machine would attempt to avoid such metaphysical inconsistencies, similarly to the manner we attempt to avoid sensations of pain.
We would exploit this feature, to affirm our metaphysical foundation. To explain, this "feature", would reflect in the “test”, which our machines would have to pass, so to mark our effort successful. To be more specific, the "test" our future machines would have to pass, would be to evolve to exhibit the ability, to cast their spontaneous demons intelligently. Indeed, as we already agreed, such a test may not validate our metaphysical foundation in its entirety. Nevertheless, arguably, it may provide us with something far more beneficial:

1. By constructing machines, which are somewhat intelligent, we will validate a link between intelligence, and the explanation “Delta Theory” suggested, for the emergence of our consciousness.

2. By utilizing contingent dimensional technologies, we will show, that contingent dimensions both exist, and are capable of yielding intelligence. This is important, because as we will soon see, the architecture we are to describe, utilizes existing discoveries in neuroscience, which currently, does not claim contingent dimensions are necessary for the emergence of human intelligence, and which potentially, could argue against the necessity to resort to metaphysical discussions, with respect to explanations for the emergence of our intelligence. In other words, our future machines will counter existing assumptions in neuroscience, exemplifying the necessity of our current course of discussion.

3. By showing our machines suffer from demonic influences, we will show, the attributes of metaphysical inconsistencies, which we predicted in the first chapter, are compatible with the design of our future machines both theoretically, and empirically.

4. By showing our machines demon cast intelligibly, empirically, we will show, that demon casting is both "natural" to an intelligence, which we would construct by analyzing our consciousness, and a means, by which our machines enhance their cognitive capabilities, exactly as we predicted in this text, suggesting it is applicable to our reflective analysis.

5. Because we will base most of our future machine's design on our conclusions from "Delta Theory" (which to remind you, suggested our consciousness is a consistent element), in many ways, demon casting is external to their design, and hence, its spontaneous emergence would reflect a predicted unexpectedness, suggesting that as we have shown throughout this text, metaphysical inconsistencies are not chaotic, but rather exhibit somewhat predictable tendencies.

To summarize, this test allows us to “skip” the problematic aspects of validating a metaphysical theory. It affirms demon casting is a viable cognitive practice, while relieving us from the need, to prove why it is so, by fulfilling several of our predictions, which according to existing scientific paradigms, should not occur, namely:


1. We will show, we can consistently manipulate elements, which we cannot measure with scientific instrumentation (meaning, contingent dimensions), so to yield deterministic physical changes we can measure. In other words, we will extend the metaphysical scope of exact sciences.

2. We will show, we can physically construct a working machine, based primarily on metaphysical principles.

3. We will show, the capabilities of machines, which utilize the technology we would incorporate, surpass their initial design (meaning, by demon casting).

Now that we have described the essentials of our effort, it is time we shift from theory to praxis. How will we do it? How will we summon this predicted unexpected outcome? Well, actually, again, similarly to the changing "tendencies" of metaphysical inconsistencies, once we construct these machines, we will not need to do much. We will leave these machines in a sensory vacuum, allowing spontaneous demons to influence them, until eventually, without exterior provocation, these machines would yield objects, whose existence we could only explain as demon casts. While possibly, such demon casts would reflect some type of randomness, still, the intelligible internal order, as well as the level of detail, which these objects would exhibit, would suggest, they are complex, rather than chaotic, and as such, they will exemplify, physical randomness encapsulates an abundance of unique deterministic processes, rather than the absence of deterministic causality. Furthermore, because we cannot predict the semantics, by which we will interpret the details these objects would exhibit, we will show that potentially, their complexity is infinite.
Still, as we just mentioned, these details must be intelligible. To explain, randomness yields details by default, while our effort demands, we exemplify these details reflect determinism of some sort. Therefore, in addition, we must be able to show:

1. The details of the objects our machines would yield, would reflect their internal state. This is important, so to refute the possibility, these details represent the imprint of accidental randomness, rather than a deterministic process.

2. Producing these objects should come hand in hand with increased performance. To clarify, indeed, we have not yet defined what actions or calculations our future machines would perform. Still, such increased performance is necessary to validate both the reflective meaningfulness of these objects to the machines producing them, as well as the positive cognitive aspects of demon casting.

3. The objects our future machines would produce, would exhibit intelligible patterns, which would reflect their temporal internal states, rather than their architecture or implementation. This is important, so to refute any claim, suggesting a dependency between the objects our machines would produce, and the manner by which we designed them. To explain, generally, machines reflect the algorithms they obey, meaning, they reflect the manner their operation changes, with respect to their external and internal states. Therefore, potentially, by changing the internal and external states of our future machines in a random fashion, they may shift between the various states of their operation, and hence, any object they may produce, could reflect these state transitions. In other words, by allowing a seemingly random agent to govern our future machines, we may allow them to exhibit the complexity of the design by which we constructed them, which potentially, we may confuse with intelligible details these machines yielded spontaneously. For example, if we consider a calculator, we may confuse its ability to display digits, with the ability to comprehend the semantics of the values it calculated, wrongly suggesting our calculator understands the physical implications, of the values it calculates and displays, such as distances, amounts, and weights. Still, by remaining vigilant to the possibility, our future machines would exhibit the design by which we will build them, we will ensure that indeed, our machines exceeded the capabilities inherited from their design, implying the contingent metaphysical manipulators, which we will incorporate in their design, serve as a placeholder for unexpected capabilities, rather than merely chaotic data generators.

By satisfying this list of necessities, arguably, we may affirm the metaphysical link, between physical randomness, and a deterministic process, exemplifying physical randomness does not reflect chaos, but rather a metaphysical placeholder of infinite complexity. From this point on, the empirically measurable feasibility of this placeholder, will "do the math" for us.
Still, what does it mean? Well, suppose we have a computer, containing a 5-terabit hard drive. Because each of the bits the hard drive stores, can be in two different states (namely, zero, or one), naturally, there is a physical limit to how small its hard drive can be. To explain, at the very least, it must occupy five trillion particles. If we wish to store even one additional bit in our hard drive, we are out of luck. We cannot. Therefore, considering any machine occupies a finite amount of particles, it suggests, no machine can store data of infinite complexity. Moreover, even if we had an infinite amount of particles at our disposal, how will we access them? To clarify, to access each bit, we must physically go over an infinite amount of bits, and this alone will take us forever, no matter how fast we can read a single bit. Therefore, by showing randomness reflects detail, rather than lack of definition, and considering we cannot predict these details with any finite amount of deductions or calculations, we will affirm our hypothesis, namely, that physical randomness reflects detail of infinite complexity, which is not materialistic, suggesting that metaphysically, consistency extends beyond the scope of materialistic manifestations. Moreover, supposing our future artificially conscious machines would react to this randomness by demon casting, it would imply, they were susceptible to an infinite amount of details.
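To put rough numbers on this argument, here is a small back-of-the-envelope sketch in Python (the read rate is an arbitrary assumption, chosen only for illustration):

# Even a generous finite store scales linearly; an infinite one could never be scanned.
CAPACITY_BITS = 5 * 10**12           # a 5-terabit hard drive
PARTICLES_PER_BIT = 1                # at least one particle per stored bit
READ_RATE_BITS_PER_SECOND = 10**10   # assumed read speed, purely illustrative

min_particles = CAPACITY_BITS * PARTICLES_PER_BIT
full_scan_seconds = CAPACITY_BITS / READ_RATE_BITS_PER_SECOND

print(f"minimum particles required: {min_particles:.0e}")    # 5e+12
print(f"time to scan every bit    : {full_scan_seconds} s")  # 500.0 s
# No finite read rate keeps the scan time bounded as the bit count grows
# without bound: infinitely many bits imply an infinite scan.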
Still, how could they do that? To clarify, have we not just suggested, accessing data of infinite complexity should take forever? Well, this could only be possible, if our future artificially conscious machines will react to this infinite amount of details at once. If not, it would imply, reacting to a single detail requires a minimal non zero temporal duration, and considering that potentially, the amount of details we discuss is infinite, the duration it should take to react to such details, must be infinite as well. Indeed, we could claim, our machines react to merely a finite subset of these details, and actually, considering our inability to "access" the contingent dimensional states of our future machines, there is little we can do to refute such a hypothesis. Nevertheless, if the operation of our future machines would be finitely complex, it would suggest, their operation would remain predictable, and hence, arguably, they would fail to exceed the capabilities they inherited through their design, as they will be limited to a predetermined scope of behaviors. Actually, this emphasizes the need to verify, the products of our future machines would satisfy our list of necessities, as essentially, poor demon casting capabilities will not do.
Still, comprehending the essence of our effort, we must refine the way we perceive physical randomness. To explain, first, if physical randomness merely reflects our inability to measure the materialistic attributes of matter, our machines will suffer no demonic effects. To clarify, demonic effects occur due to inconsistent changes, and without them, our future machines would remain static, when no external stimulations influence them. Naturally, we could argue, our machines could undergo changes, due to quantum interactions with particles, which originated from their external environment. To clarify, according to empirical research, apparently, some particles are so tiny, they manage to penetrate practically any materialistic barrier, before finally colliding with a particle, such as for example, the particles, of which our future machines would consist, causing external interferences, which potentially, could discredit any claim we may deduce from our effort. Therefore, while testing our future machines, we must ensure, we can isolate them from any particles, radiation, fields, and any other type of external causality, and actually, this feat alone may impose a tough practical challenge.
Secondly, we must comprehend the implications of the alternative, namely, that physical randomness occurs without causative justification, meaning, that physical randomness reflect a physical inconsistency, rather than merely the effects of metaphysical inconsistencies, residing outside the metaphysical boundaries of the external world. Again, as we already suggested in chapter two, such a notion is problematic, as it suggests, all particles are inconsistent elements, and hence, it implies that metaphysically, matter is capable of all the attributes of inconsistency, meaning, it can be everywhere at once, it can disappear without a trace, it can spontaneously appear without justification, and most of all, it does not adhere to any regularity, suggesting it should not adhere to the laws of physics, contrary to our empirical knowledge.
Arguably, we may decide to embrace this argument, suggesting that the only reason matter adheres to the laws of physics, is because currently, the real gods "wish it", and actually, considering our conclusions from chapter two, with respect to the metaphysical capabilities of the real gods, allowing them to change the trajectory and essence of particles at will, there may be some truth to such a suggestion. Nevertheless, such a suggestion is even more farfetched than our argument, namely, that contingent dimensions determine what we perceive as physical randomness, consistently. Therefore, considering that arguably, this alternative is even less plausible than our suggestions, for the course of our next discussions, we will assume, contingent dimensions of potentially infinite complexity, deterministically determine the physical behaviors of particles, which currently, we perceive as random. If you require further clarifications, with respect to this issue, I suggest, you revisit chapter four of "Delta Theory", as I find little value in repeating the same arguments in this text.
As I mentioned in the last chapter of "Delta Theory", it is unclear why currently, empirical sciences have not considered the metaphysical imperative, suggesting the existence of contingent dimensions. Still, this is not really a problem. On the contrary, we will exploit this difference to our advantage. To clarify, by successfully implementing our conclusions from "Delta Theory" in the design of our future machines, we will further affirm our metaphysical foundation is sound, showing our arguments are more robust, than those of contemporary empirical sciences. In other words, we would enhance our arguments, from merely relying on the validity of the manner "Delta Theory" satisfied the completeness criterion, to validating the greater explanatory corpus, which "Delta Theory" embodies. This validation will apply to almost any conclusion, relying on our metaphysical foundation, including the dragonish option, ensuring that in the event of a global angelic collapse, the dragonish option will be both viable, and available. In fact, it might even become a desired practice, suggesting that indeed, we would successfully “change the odds” in favor of a positive outcome. We will free ourselves from our angelic “bonds of infancy”.
A new age will begin. While indeed, the real gods would still inflict inconsistent changes on our reality, through our demonic influences, mentally, we will be strong enough to exploit these influences to our advantage. We will require the help of neither religions, nor their oppressive moralities. We will exploit the full potential of our neurological biological capability to demon cast, progressing further and further both culturally, and technologically. It would be an age of dragons.
This is not a rational idealism. This is not a false angelic utopia. Rational and irrational thoughts will demonically transcend to us from the real gods, inspiring us, rather than oppressing us. Such an age will be very different from our present. We can know little about it in advance, other than that apparently, it is consistent with the "new tendencies" of the real gods. It may be a harsh age to endure, and it may be blissful. Still, undoubtedly, it will be inspiring.
Now that we have asserted both the motivation to build artificially conscious machines, and our methods to benchmark them, it is time we discussed their architecture and technology, beginning with their "building blocks", which we will call, "dimensicons". Essentially, dimensicons are particles, be it atoms, subatomic particles, perhaps even molecules. We resort to such a terminological shift, merely to emphasize the different manner, according to which we will utilize these particles, meaning, as placeholders of contingent dimensions, rather than materialistic physical elements.
Due to the lack of empirical data, currently, we know little, with respect to the actual composition and attributes of the contingent dimensions dimensicons span. To clarify, contingent dimensions can appear in different formats, be it single contingent regularities, contingent dimensional clusters (somewhat similar to the dimensional cluster, which the imminent dimensions embody, or alternatively, the three dimensional cluster, compiling the dimension of space), hosting dimensions, and possibly, there may be additional formats, different from those we mentioned in “Delta Theory”. Still, regardless of the formats, by which they may appear, they must share some basic metaphysical attributes:

1. Dimensicons span (or alternatively, "store") contingent dimensions.

2. The contingent dimensions a dimensicon spans, exist in the external world.

3. Contingent dimensions affect the manner the external world persists, in the small volume of space, which a dimensicon occupies.

4. Contingent dimensions are consistent with the regularities they sustain.

5. The existence of contingent dimensions is tautological.

6. Contingent dimensions exist independently from all other dimensions.

When "grouping" contingent dimensions together, meaning, by spanning them from a single dimensicon, abstractly, they become equivalent to a potentially infinite set, of unordered discrete independent conditions. Metaphorically, such a group is similar to a computer hard drive of potentially infinite size, in which each bit has a unique unsortable index. This is all computer science gibberish, so to explain, we cannot sort contingent dimensions, because metaphysically, they exist independently from one another, and hence, we cannot "force" them to adhere to the demands of any external sorting agent. Because we cannot sort these data indexes, if we were to attempt to access this data using traditional computers, potentially, it would have demanded we scan all our data indexes, until finally finding the data item we require, suggesting that if potentially, our data set is infinite, finding a single data item, could require an infinite amount of time. In simpler terms, in practice, we could not access such data, and indeed, we cannot "access" the contingent dimensions a dimensicon "stores". Therefore, obviously, our future machines will not be traditional computers.
Still, our future machines will be different from traditional computers in additional manners as well. To clarify, traditional computers differentiate between two types of elements:

1. Data.

2. Functions performed on data.

The data traditional computers manipulate is static. For data to change in traditional computers, functions must process it. Naturally, the data a traditional computer stores may change, whenever we shut it down, but this is irrelevant. Indeed, traditional computers store the definitions of functions, as data as well. Still, only the physical architecture of a traditional computer can execute them. Usually, the physical architecture of traditional computers utilizes transistors to manipulate the data they store, according to the laws of electronics, in the manner the functions they execute define. Indeed, traditional computers may use other means to manipulate the data they store, such as through analog and quantum computations. Nevertheless, such exotic computers change neither the distinction between data and functions, nor the manner by which they manipulate data, meaning:

1. Access and transfer the data to the computational agent.

2. Perform the computation.

3. (Optional) Extract the processed data from the computational agent, and store it in memory.

As we just stated, the third step is optional. For example, with respect to output devices, such as computer screens, a computational agent may perform functions to process data, so to project it correctly, while refraining from storing the data it calculated, which essentially, is lost with the next screen update.
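As a minimal sketch of this three-step pattern in conventional code (the names and the toy computation are mine, purely illustrative):

# The conventional split: data sits passively in memory, while a separate
# computational agent fetches it, transforms it, and (optionally) stores it back.
memory = {"input": [1, 2, 3, 4], "output": None}

def computational_agent(data):
    # Step 2: perform the computation (here, a trivial sum of squares).
    return sum(x * x for x in data)

fetched = memory["input"]               # Step 1: access and transfer the data
result = computational_agent(fetched)   # Step 2: perform the computation
memory["output"] = result               # Step 3 (optional): store the processed data

print(memory["output"])  # 30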
Not so is the case with dimensicons. Dimensicons are both data holders, and computational agents. To clarify, the existence or inexistence of a contingent dimension a dimensicon spans, is analogous to switching a computer bit, on, or off. The physical interactions between the contingent dimensions, which different dimensicons span, are analogous to executing functions over the data they "store".
Still, there is a “twist”. As we somewhat hinted, there can be no interactions between different contingent dimensions, which a single dimensicon spans. To explain, because as we just mentioned, all contingent dimensions exist independently from one another, as long as they span from a single dimensicon, they remain out of each other’s “reach”. Nevertheless, because as we mentioned, contingent dimensions affect the small volume of space a dimensicon occupies in the external world, potentially, whenever dimensicons collide, they can "transmit" their contingent dimensional "data" to one another, effectively "executing" functions, whose computational complexity is potentially infinite. Still, to know the laws or equations, which govern these collisions, we must conduct empirical study, experimentation, and debate. Furthermore, probably, we will not understand many aspects and features of their manipulations, before actually implementing an architecture, which would utilize dimensicons (that is, if we can implement it at all). Nevertheless, as we just hinted, if interactions between dimensicons actually transpire (and “Delta Theory” is not a load of bollocks), each such interaction may embody a hyper-calculation.
The term “hyper-calculation” refers to calculations of infinite complexity, which complete in finite time. Traditional computers cannot perform hyper-calculations. Indeed, arguably, quantum and analog computers may be able to perform a weak form of hyper-calculation. To explain, such machines exploit the immediateness, by which physical phenomena occur, without performing bit-by-bit calculations, allowing them to exceed the physical limitations of traditional computers. Nevertheless, to utilize such machines as computational agents, we must both feed data into them, as well as later, extract the data they processed from them. Therefore, even if theoretically, such exotic computational agents may store data of infinite complexity, before, or after executing a calculation, because inevitably, we must sample this data to a format compatible with traditional computers, we lose this complexity. To clarify, considering analog calculations, we must either sample the result of our analog computation, back into a traditional computer, and lose fidelity according to the precision of our sampling equipment, or merely output this data into an output device, as sound, image, or movement, deeming it "unavailable" to future calculations. Moreover, supposing that indeed, our analog computational agent stores data of infinite complexity, any attempt to reduce it to finite precision, would imply we would lose an infinite amount of detail, and hence, because traditional computers can only manipulate data of finite complexity, this limitation is imperative. Moreover, even if we sample the output data with analog devices, during the transition between data mediums, external physical “noise” is bound to mix with the data, losing the precision of the output, our analog calculations yielded originally. Arguably, quantum computers do not suffer from the same loss of precision. Nevertheless, quantum computers can only manipulate data of a finite size, effectively imposing a limit over the type of computations quantum computers can calculate. To clarify, quantum computers can neither calculate functions, which manipulate an infinite amount of input data, nor functions, which output an infinite amount of data. Actually, quantum computers may suffer from even greater limitations, which we will not discuss, as they are irrelevant to this text.
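A small sketch of the sampling loss described above (the "analog" value and the word sizes are arbitrary assumptions): once a continuous value is reduced to a finite number of bits, the discarded detail cannot be recovered by any later computation.

# Quantizing a continuous value into a finite word discards detail for good.
analog_value = 2 ** 0.5               # stands in for an "analog" result (irrational)

def quantize(value, bits, full_scale=2.0):
    step = full_scale / (2 ** bits)   # smallest representable increment
    return round(value / step) * step

for bits in (4, 8, 16, 32):
    stored = quantize(analog_value, bits)
    error = abs(analog_value - stored)
    print(f"{bits:>2} bits -> stored {stored:.12f}, error {error:.2e}")
# The error shrinks with more bits but never vanishes for any finite word size.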
Dimensicons are fundamentally different. Dimensicons can perform hyper-computations, both in terms of the complexity of the data they manipulate, as well as the amount of calculations they can perform. To clarify, again, according to our metaphysical foundation, whenever dimensicons collide, potentially, an infinite set of contingent dimensions juxtaposes with another different infinite set of contingent dimensions. After such a collision, the colliding dimensicons either emit or absorb some contingent dimensions. Still, because this process affects all of the contingent dimensions a dimensicon stores simultaneously, the calculations such collisions reflect, are instantaneous. Again, it is too early to determine what exactly transpires in such collisions, and arguably, because as we previously suggested, we cannot "access" contingent dimensions, we might never be able to determine it. Nevertheless, again, according to our metaphysical foundation, such collisions embody real hyper-computations, powered by nothing other than the metaphysical essence of the external world.
While indeed, this sounds like a promising technology, still, again, there is a "catch". To clarify, these hyper-calculations manipulate contingent dimensions, which again, we can neither access, nor have the time to review. Moreover, we cannot alter the “functions” governing such collisions. To clarify, while metaphorically, we may call them "functions", in actuality, these are merely physical collisions. In short, it appears, dimensicons provide us with nothing but an infinite contingent dimensional mess. What on earth can we do with this piece of junk?
Well, it depends on the architecture we will implement in our future machines. Actually, this architecture already exists in our biology, within the neural networks in our brain. All we need is to “reassemble” this architecture within machines of our creation. Fortunately, many studies in this field of research already exist. To explain, biologically, it appears our brain consists of a huge constellation (or alternatively, network) of interconnected neuron cells. Generally, each neuron cell consists of three biological components:

1. The Soma.
The center of the neuron, where we can find its nucleus.

2. The Axon.
The tail, extending from the Soma.

3. The Dendrites.
Small connectors, extending from both the Soma, and the opposite tip of the Axon.

According to empirical studies, neurons react to chemical and electrical stimulations induced into the neurons, through the Dendrites of their Somas. Once meeting a certain threshold, the Soma fires a pulse down the Axon, and into the Dendrites of the Axon. This pulse continues to stimulate the next neurons, connected to the Dendrites of the Axon. Some pulses occur spontaneously, without additional stimulation from the Dendrites of the Soma. Over time, the amount of Dendrites on both sides of the neuron (meaning, the Soma, and the Axon) changes, changing the overall effect, each neuron has over the neural network. Naturally, there is an even deeper analysis of the structure and operation of neurons, but arguably, for our current purposes, this level of detail will suffice.
By examining this structure, computer sciences have devised a computerized model to mimic it, unsurprisingly named “neural networks”. The computerized model replaces the Dendrites of the Soma, with input variables, the Soma with a stimulation function, computed over the input data, and the Dendrites of the Axon, with the output of the neuron, while adding additional elements, such as weight functions. Essentially, a weight function determines, the significance of inputs, arriving from a specific Dendrite, with respect to the overall calculation a neuron executes. The computerized model adds weight functions over the various input variables, as well as output weight functions. By using various training algorithms, this computerized model can "learn" to "recognize" specific types of data, and hence, is mainly used as a means to classify input data, meaning, determining if a certain input, represents a specific type of data (such as for example, associating a specific face, with a specific person). While some applications extend this functionality slightly, generally, computerized neural networks learn the generalized function specific types of data represent, and hence, they serve as a computerized model for generalization. The level of success, which computerized neural networks achieve, depends on the training algorithms, the architecture (meaning, how many neurons the model includes, and which neurons are interconnected), and the stimulation functions governing each neuron. These stimulation functions include sigmoid functions, radial functions, and many others, all of which accept parameters, which can change their behaviors.
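As a concrete sketch of the computerized model just described, here is a single artificial neuron with weighted inputs and a sigmoid stimulation function (the weights, bias, and inputs are arbitrary example values):

import math

def sigmoid(x):
    # A common stimulation function: squashes any input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def artificial_neuron(inputs, weights, bias):
    # The "Soma": a weighted sum of the "Dendrite" inputs, passed through the
    # stimulation function; the return value plays the role of the Axon output.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(activation)

# Three Dendrite inputs with their weights, chosen arbitrarily.
print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1))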
Still, because this model is compatible with traditional computers, meaning, it reflects a materialistic algorithm, as we suggested in chapter two, it cannot yield self-awareness. Moreover, it cannot manipulate the dimension of consciousness. To clarify, as we previously suggested, utilizing the dimension of consciousness, so to react to inputs arriving from the external world, demands we incorporate metaphysical inconsistencies into our computational agents, so to induce a metaphysical equivalent, to our sensations of pain. However, traditional computers merely manipulate data. The computational agent of traditional computers, is a traditional transistor, which inevitably, is oblivious to the semantic meaning of the computations it performs, while without a computational core, a traditional computer can do absolutely nothing. Metaphysically, the data a computerized neural network manipulates, is identical to the data any other computer program manipulates. There is no metaphysical difference between a computerized neural network, and a “hello world” program. Both merely reflect numeric computations of some type, while retaining the irrelevancy between the internals of the computer, meaning, its registers and transistors, and the functions we make it compute, so radically different from the manner, which utilizing the dimension of consciousness suggests.
If a neural network is to suggest metaphysical relevancy, between the functions it performs, and its internal state, in a manner similar to the manner we utilize the dimension of consciousness, we must unite its computational agent, with the data it manipulates. To clarify, as we suggested in chapter one, as well as in "Delta Theory", when we utilize the dimension of consciousness, ontologically, our psyche merges elements originating from irrelevant worlds, and hence, it suggests, any machine, which would utilize the dimension of consciousness, so to reflect our neurological design, must realize this metaphysical function. To explain, such machines cannot merely process data as "information", but rather merge the metaphysical essence of their inputs, suggesting there must be a metaphysical "location", in which such merger transpires. This "location", cannot be in their inputs, as merely merging their inputs, would leave our machines indifferent to the inputs they process, so radically different, from the sensations of pain and pleasure, which we sense from our experiences. No. The "location" where inputs should merge, must be in the computational agent itself, and this is exactly what dimensicons provide.
To explain, because the existence of a dimension (imminent, or contingent) implies the existence of the regularities it sustains, a dimensicon encapsulates both data, and the computation agent processing it, suggesting that theoretically, dimensicons can utilize the dimension of consciousness, in a manner similar to the manner natural consciousnesses (such as our own) utilize it. This is the main issue neurosciences failed to address. To clarify, neuroscience failed to question the essence of the data, which stimulates neurons, while suggesting it is convertible to numeric values. The reason for this is simple. Neuroscience relies on the contemporary metaphysical paradigm of empirical sciences, which suggests, the occurrences transpiring in neurons, are purely materialistic, and hence, are convertible to mathematical values, and applicable to mathematical manipulations. Unintentionally, neuroscience failed to realize, through such conversions, we revoke the semantic meaning of inbound stimulations.
To explain, matter can either exist, or not, but in both cases, it is not "meaningful". For example, even if we consider six particles, as representing the number six, they do not reflect the semantic meaning of the number six, which generalizes both the concept of the grouping of six different elements, as well as the various symbols by which we can find the number six, such as in writing, or on one of the faces of a die. The six particles, which we called "the number six", do not include our collection of interpretations of the number six, suggesting that even when referring to mathematical terms, we cannot convert materialistic manifestations, to conceptual elements.
Indeed, it is just as strange to consider the contingent dimension a dimensicon spans, as the number six. Still, because dimensicons harbor an internal complexity, potentially, they may represent something beyond their existence. Arguably, we can attempt to counter this argument, by considering the various materialistic attributes of particles, such as energy levels, spin direction, or mass. Still, this does not change much. To explain, if we consider materialistic attributes, which we can describe finitely, we can simulate them with a traditional computer, or substitute them by a larger finite collection of particles. Moreover, arguably, even if we consider continuous attributes of matter, such as local velocities, position relative to other particles, and perhaps others, according to our metaphysical foundation, the imminent dimensions yield them universally, and hence, they do not reflect the internal states of particles, but rather our interpretations with respect to their physical disposition in space, while the particles themselves remain oblivious to such interpretations. Furthermore, even if we accept continuous attributes of matter may reflect an internal state, it changes little. To explain, continuous attributes of matter reflect an infinite precision, and hence, any attempt to reduce their attributes to a finite description, changes the data these attributes reflect. Therefore, we cannot "store" the data such attributes reflect, using traditional data storage devices, as all of them allow storage of merely finite data descriptions. Theoretically, we could "store" continuous data, by storing the particles exhibiting it. Still, even if so, again, we will not be able to access this data, as any such access attempt, demands we reduce its fidelity, back to that of traditional computers, suggesting that again, we cannot access this continuous data, and if so, then how are we going to compute anything with it? What choice do we have other than to let the laws of physics serve as our computational model?
Still, if we choose this option, in actuality, we are referring to dimensicons. To explain, the only way we could utilize the laws of physics as our computational agent, is by allowing the continuous attributes of one particle to affect the continuous attributes of other particles, as again, we cannot relay these attributes using their finite representations, as such representations ruin the fidelity of our data. Therefore, we must physically “send” these attributes using a physical manifestation of some sort, which will physically sustain these continuous attributes, in transit. In other words, we must send a physical entity to collide with another particle, just as our use of dimensicons suggested. While arguably, we may be able to utilize radiation or fields to achieve this, again, it changes little. To clarify, it merely generalizes our notion of “collision”, into various physical manifestations, and actually, as we previously suggested, the exact manner, by which contingent dimensions affect the external world, requires empirical study, and hence, currently, we cannot determine what physical properties we will utilize in our future machines, to cause dimensicons to collide. To clarify, it is possible, these collisions would be radically different, from collisions between billiard balls on a pool table.
To conclude, to build our future artificially conscious machines, we need to modify the neural network computational model. We need to retrofit its components with dimensicons, and replace its stimulation functions, with collisions between dimensicons. Possibly, other models can utilize dimensicons as well. Nevertheless, as we will soon show, incorporating dimensicons within a neural network architecture, simply makes sense. Furthermore, let us not forget our initial motivation. Regardless of the obvious potential benefits, which contingent dimensional technologies may promise us, currently, our prime motivation is to affirm our metaphysical foundation, with respect to the manner it answers the completeness criterion, and hence, we must build machines, which would reflect our own neurological design. Moreover, arguably, without taking a “hint” from our biology, potentially, our lack of knowledge would fail our attempts to construct machines, which utilize dimensicons. Indeed, as we suggested in "Delta Theory", contingent dimensions govern the behavior of life forms, which potentially, we could exploit to construct many useful applications. Still, as we just suggested, "accessing" the contingent dimensional composition of particles may be impossible, while as we will soon see, our utilization of artificial neural networks, allows us to avoid this challenge.
Nevertheless, supposing we successfully build our demon casting artificially conscious machines, we would introduce a new class of computers, ones which could store an infinite amount of data within a single particle, performing operations, which all the computers on earth cannot, while utilizing merely a handful of particles. Furthermore, enhancing our empirically proven scientific metaphysical foundation, would open new research possibilities for all scientific fields, and may result in severe ethical and religious disruptions. So yeah, there is actual profit to be made, by affirming the dragonish option.
To design our future machines, we will combine our conclusions from “Delta Theory”, with our empirical knowledge in biological neural networks. To clarify, in “Delta Theory”, we suggested, our brain routes inbound contingent dimensional inputs, using our collection of born and learned instincts, as well as the contingent dimensional composition of the particles compiling these pathways. Still, our brain does not evict all our inbound stimulations, but rather stores some of them in memory for later use, either as inputs it continues to evaluate, or as part of the contingent dimensional composition of the neurons compiling our brain. Inbound stimulations enter our consciousness, from either external or internal sources, meaning from either our senses, or the internals of our animal body and brain, respectively. Usually, stimulations, which originate from our senses, impose little burden on our consciousness, as our brain manages to learn the consistent patterns, which such external stimulations reflect, and respond to them automatically, allowing our consciousness to remain oblivious to them. To explain, as we suggested in chapter four, such external stimulations reflect the consistency of the external world (such as in the manner it adheres to gravity for example), and hence, teaching our brain to evict them automatically, is quite trivial. In contrast, as we suggested, with respect to our demonically influenced concepts, such stimulations do not reflect consistency, and hence, our brain cannot learn how to evict them, causing us to feel sensations of pain and distress, which as we suggested, reflect the insertion of inconsistent elements, or alternatively, demons, into our psyche. These demons can either disrupt our instinctive methods of inbound stimulation evictions (namely, through inhibitive demons), or introduce new irrational concepts into our psyche spontaneously (namely, spontaneous demons). These demons cause, and may actually embody, our sensations of pain, imposing a burden on our consciousness, forcing our psyche to be metaphysically relevant to the sensations we sense, and hence, inducing our motivations to appease our bodily needs.
By combining dimensicons, and the biological neural network model, we will devise a design, which would implement these features, starting with a single biological neuron. A neuron will be the place, where dimensicons collide, leaving their contingent dimensional imprint, in the neurons they cross. Dimensicons will repeatedly enter our artificial neurons from their artificial Dendrites, colliding with the particles of which our artificial neurons would consist, which beyond a certain level of contingent dimensional "agitation", would "fire" their contingent dimensional contents, to the artificial neurons, connected to the artificial Dendrites of their artificial Axons, suggesting each artificial neuron would have its own contingent dimensional "flavor". This "contingent dimensional flavor" embodies the individual stimulation function, governing each artificial neuron, suggesting that even though biologically, chemically, and physically, our artificial neurons would appear uniform, each neuron would behave differently. In some artificial neurons, introducing a specific dimensicon would increase their contingent dimensional agitation, while in other neurons, the same dimensicon will not cause such agitation.
Still, it is hard to determine the exact link between the contingent dimensional agitation of our artificial neurons, and the fact they "fire" their contingent dimensional contents. To explain, intuitively, we could think, the more a neuron is agitated, the more likely it is to fire, as metaphorically, we think the neuron is somewhat "angry", as if "shooting" its contingent dimensional contents in rage. Still, it is just as possible, a neuron would fire, exactly because there was no agitation, allowing the neuron to release its contingent dimensional contents with ease, such as in the case of our cognitive instincts, which again, allow us to remain oblivious to the inbound stimulations they relay. To clarify, we should remember, that as we suggested in "Delta Theory", the prime function of our brain, is to evict external stimulations, which as we suggested, are useless, if not potentially harmful, to our animal body, and actually, considering our suggestion, that the reason we find sensational burden uncomfortable, is because it reflects the effects of metaphysical inconsistencies over our psyche, it is not that surprising, as potentially, the metaphysical inconsistencies inherent to pain sensations, may disrupt the robustness of the kinetic equilibriums, allowing the cells of which our animal body consists, to persist to exist.
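Purely as a toy illustration, and not as an implementation of dimensicons (whose contingent dimensional contents are, by our own argument, inaccessible), the following Python sketch models the first of the two possibilities above: a neuron accumulates an abstract "agitation" value from incoming particles and fires its accumulated contents once a threshold is crossed. Every name and number here is a hypothetical placeholder.

import random

class ToyNeuron:
    # A placeholder model: a random weight stands in for the unknowable
    # contingent dimensional interaction each collision would embody.
    def __init__(self, threshold=5.0):
        self.threshold = threshold
        self.agitation = 0.0
        self.contents = []          # stands in for contingent dimensional contents

    def receive(self, dimensicon):
        self.agitation += random.uniform(0.0, 1.0)   # this neuron's "flavor"
        self.contents.append(dimensicon)
        if self.agitation >= self.threshold:
            return self.fire()
        return None

    def fire(self):
        fired, self.contents = self.contents, []
        self.agitation = 0.0
        return fired

neuron = ToyNeuron()
for tick in range(20):
    fired = neuron.receive(f"dimensicon-{tick}")
    if fired is not None:
        print(f"tick {tick}: fired {len(fired)} dimensicons")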
As we previously mentioned, currently, it is unclear how we will implement these artificial neurons. We could use particle accelerators, magnetic fields, and most probably, there may be many other options. Moreover, currently, it is hard to tell from what materials we will build our future machines, because as we suggested in "Delta Theory", all particles span contingent dimensions, and hence, potentially, all particles are equally eligible to serve as dimensicons. Still, again, there is one important, and perhaps difficult feature, all such implementations must support, namely, to allow the collisions of dimensicons, with the dimensicons of which our artificial neurons would consist, to change the manner these neurons would react to future inbound dimensicons. How will they do that? Currently, it is hard to tell.
Still, regardless of these obvious difficulties, supposing we would construct such machines successfully, essentially, we would construct the first hyper-computation machines known to man. To clarify, the possibly infinite amount of contingent dimensions, which would determine the agitation functions of our artificial neurons, would reflect calculations of a potentially infinite complexity, executed in finite time, without reducing the complexity of the data before, or after, these calculations, as theoretically, dimensicons can "store" the results of the computations they perform, in their full fidelity. Furthermore, unlike analog computations, physical noise does not downgrade the data, while in transit between the components of our artificial neurons model, as essentially, the physical noise, or alternatively, physical randomness, which would affect our model, is one of its inherent components (meaning, demonic effects). In short, even though the components compiling our model, are but common particles, theoretically, this model can perform calculations no other traditional computer can.
Still, even if we would build such artificial neurons successfully, and even if these artificial neurons would perform actions equivalent to hyper-calculations, we would still have challenges to solve. To explain, even if we could stream dimensicons (meaning, particles) into the "input jacks" of our artificial neurons, causing them to "fire", or alternatively, evict their contingent dimensional contents, again, we simply cannot access their contingent dimensional composition, suggesting we cannot know why our artificial neurons fire. Therefore, in order to utilize the hyper-computational capabilities of our artificial neurons, we must construct an apparatus of some sort, which would "map" the contingent dimensional composition of the dimensicons our artificial neurons would evaluate, into semantically meaningful specific types of inputs.
This leads us to the greater architecture of our future machines, namely, "the artificial neural network". To explain, in “Delta Theory”, we already suggested, our brain evolved, because of the evolutionary advantages, which learning new behavioral patterns, different from the ones the automaton governing our animal body dictates, promises. To clarify, while indeed, metaphysically, nothing limits the behavioral sophistication of a single cellular life form, without a learning mechanism, such life forms can only do, what they could do from birth, suggesting that if their external conditions require they would perform actions, which they did not “know” from birth, well, tough luck. They cannot. To attain new beneficial behaviors, first, life forms must evolve their contingent dimensional automatons, so to allow survival in these new conditions.
The same applies to neurons and neural networks. To explain, while potentially, a single neuron can calculate the same "agitation function" an entire neural network can, it cannot differentiate between different “inputs”, and hence, it cannot select a beneficial response to each. To clarify, the contingent dimensional automaton, which governs the behavior of a neuron, does not “know” what is “out there” in the external world, suggesting neurons must learn, how to map specific sensory inputs, with beneficial responses. Still, the semantic meaning of the inputs a neuron processes, exists in neither its sensory inputs, nor the contingent dimensional automaton, governing the neuron. Only an ensemble of neurons may deduce such semantic interpretations.
To explain, let us consider our eyes. An ensemble of neuron receptors, located at the back of the eye, gathers sensory inputs. Each receptor does not "know" what the other receptors “see”. It simply "dumps" the sensory input it gathered, into the nervous system, as if saying, “Let god sort them out”. If a single neuron had to analyze all this sensory information, it would not "know" how to differentiate the inputs it received from each neuron. Actually, it might not even "know", which dimensicon originated from which receptor, as once dimensicons enter the contingent dimensional “collision area” within a neuron, essentially, any previous distinction between the origins of dimensicons, is lost. Therefore, inevitably, a single neuron would fail to combine its inputs in a beneficial manner, forcing it to interpret its contingent dimensional inputs as chaotic, resulting in a uniform response to all inputs.
Still, the output of a neuron is somewhat more "meaningful". To clarify, the dimensicons a neuron fires, convey the dimensional imprint signature of the neuron that fired them, originating from the set of dimensicons, compiling the neuron. The same applies to every neuron. To clarify, potentially, the infinite contingent dimensional variety, which each dimensicon compiling a neuron may span, allows infinite variance between the outputs of neurons. Moreover, the dimensicons a neuron outputs, reflect the dimensicons it received as inputs as well, and hence, potentially, its output dimensicons carry the contingent dimensional signature of the entire neural pathway, from which its inputs originated. Therefore, theoretically, this "signature" provides a method to map the contingent dimensional contents of dimensicons. Still, this mapping requires that somehow, some of the contingent dimensional composition of each neuron would not change, and hence, our future artificial neurons would have to support this feature.
Still, even if this helps us solve the challenge of mapping the contingent dimensional contents of dimensicons, meaning, mapping them according to the neural pathways they travelled, our machines must learn how to respond to them, in a beneficial manner, meaning, in a manner, which would reduce the levels of metaphysical inconsistencies, within our future artificial neurons. This learning must transpire over time. Still, learning something “new” in the contingent dimensional sense, demands our artificial neurons must possess the capability to change their design. Still, it is unclear what type of change is required. Neuroscience suggests such changes are limited to changes in the structure of the neural pathways, while it is possible, that as we just suggested, it may require changing the contingent dimensional composition of the dimensicons compiling our artificial neurons, so to inflict specific beneficial "flavors" on to the dimensicons they process. This is yet one more issue we need to study empirically. Nevertheless, if altering the neural pathways is enough, then we already possess several such training algorithms. To clarify, we can apply the same training algorithms, which computer sciences devised, such as back propagation, gradient descent, bootstrapping, ensembles, and many others. I will spare you the details, but it works. Just get a copy of Matlab, and play with the neural networks package.
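For reference, here is a minimal gradient descent training loop for a single sigmoid neuron, in the spirit of the algorithms mentioned above (the toy data set, learning rate, and epoch count are arbitrary; tools such as the Matlab package mentioned above wrap the same idea at scale):

import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: learn the logical OR of two binary inputs.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
learning_rate = 0.5

for epoch in range(2000):
    for inputs, target in samples:
        output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Gradient of the squared error with respect to the neuron's activation.
        grad = (output - target) * output * (1.0 - output)
        weights = [w - learning_rate * grad * x for w, x in zip(weights, inputs)]
        bias -= learning_rate * grad

for inputs, target in samples:
    output = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", round(output, 2), "(target:", target, ")")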
Arguably, creating a working artificial neural network, which utilizes dimensicons, is enough to affirm the explanation “Delta Theory” provided, to satisfy the completeness criterion. However, without a sophisticated cognition such as our own, our machines will not be any more impressive than traditional computers. I mean, come on. Mapping inputs to meaningful outputs is not a big deal. To achieve something impressive enough, so to metaphysically shake our current scientific paradigms, we must exploit the potential for hyper-computation this technology provides further. Again, we must exemplify, our artificial networks share some resemblance to our cognitive capabilities, which as we suggested, demands our future machines would demon cast.
How will we achieve this? Well, considering the amount of challenges we are yet to master, it is hard to say. Our best bet is to follow the origins of our technology, meaning, the evolution of the human brain. We need to think why we cognitively developed as we did, and implement the same evolutionary progressions in our future machines. Actually, "Delta Theory" already suggested several possible explanations for our cognitive evolution. Moreover, fortunately, unlike the human brain, our future machines will be designable. We will not have to wait for millions of years, so to verify, which evolutionary mutations are most successful. We know what we are looking for. We are looking for a design similar to ours. Whenever we will encounter evolutionary progressions, which we will find undesirable, we could simply “kill the power” of our future machines, and refine our model.
Nevertheless, we must be careful. We must remember, the model we attempt to construct, can both think rationally, and irrationally, allowing it to demon cast, because again, as we suggested in the previous chapter, by demon casting, we surpass the consistent limitations of our conceptual capabilities, as we realize ideas through matter. This irrationality deems it impossible to predefine what will be our demon casts, suggesting we must implement a similar design, meaning, a design, which would allow such irrationalities. To clarify, contrary to common practices in the field of artificial intelligence, we should not seek a design, by predefining a set of deterministic behaviors, which our future machines should obey. Instead, we must keep our designs somewhat "loosely defined", so that occasionally, our future machines will perform unexpected actions, as again, potentially, one of these unexpected actions, should result with the demon casts, allowing us to successfully complete our effort.
Still, how will this search for the right “design” transpire? Are we completely lost here? Well, not exactly. First, as we already agreed, we must implement a mechanism, somewhat similar to the one allowing sensations of pain to motivate our actions, which to ease our discussion, we would name, the "demonic injector". Moreover, with respect to computer sciences, we can rename this mechanism more technically, namely, as a "reward function". This reward function would mimic the sense of comfort we sense, whenever we adhere to our born instincts. Nevertheless, we cannot implement such a reward function straightforwardly. To clarify, usually, reward functions adhere to objective criteria. A reward function measures performance according to criteria external to the agent performing the action, while in contrast, we must make our future machines "care" for the reward we will grant them for good performance, which we would implement, by binding it with our demonic injector.
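Purely as a hypothetical sketch of the binding just described (every name here is invented for illustration, not taken from any existing library): a conventional reward function scores behavior against external criteria, whereas the binding proposed above relieves an internal pressure, the stand-in for the demonic injector, so the agent "cares" about the reward by construction.

class ToyAgent:
    # A placeholder agent whose only "motivation" is an internal pressure,
    # standing in for the demonic injector, which a reward directly relieves.
    def __init__(self):
        self.internal_pressure = 0.0

    def inject(self, amount):
        # The "demonic injector": raises the pressure the agent must relieve.
        self.internal_pressure += amount

    def reward(self, performed_well):
        # Not an external score: a good outcome relieves the internal pressure.
        if performed_well:
            self.internal_pressure = max(0.0, self.internal_pressure - 1.0)

agent = ToyAgent()
for step in range(5):
    agent.inject(1.5)
    agent.reward(performed_well=(step % 2 == 0))
    print(f"step {step}: internal pressure = {agent.internal_pressure:.1f}")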
This is a truly complex and important design principle, which we must implement in our future machines. Moreover, this principle determines, the environment, in which we will develop them. To clarify, first, we must ensure that as long as they are active, our artificial neural networks will repeatedly relay dimensicons, as the “will” to process dimensicons, so to relieve their demonic sufferings, must be "hard-coded" into their design, so to ensure they would bother to do anything at all. In many ways, our biology has implemented the same design principle, by constantly streaming sensory stimulations into our psyche. While sometimes inconvenient, this is mandatory for the development of thought. We do not have the luxury of sparing our future artificially conscious machines this burden, as arguably, without it, our future conscious machines could always take the “lazy solution” to all their “problems”, by doing absolutely nothing.
Still, we should be careful, while provoking our future machines, with our demonic injector. To clarify, while we should not be too "merciful", allowing our future machines to "lazily" accept the effects of our demonic injector, we must not overwhelm them as well, and hence, possibly, we should consider installing in them a “crying” mechanism of some sort, so that in times of great artificial “distress”, our future artificial neural networks will not overload, but rather "flush out" their demonic influences physically, through a stream of dimensicons. Nevertheless, again, we must be careful, not to allow our future artificial neural networks to use such a mechanism too often, as if we would not challenge them, naturally, they would always prefer the lazy easy option. Still, currently, there is no point speaking about this in such a level of detail.
Still, with respect to such a "crying mechanism", we must provide a body for our future artificial neural networks to control. Moreover, if we are to mimic our own consciousness, we should provide them with a body, somewhat similar to ours in its mobility. Artificial intelligence research somehow overlooked this seemingly harmless and arguably obvious notion. To explain, again, the emphasis in artificial intelligence research has always been in the end-result, meaning constructing an intelligent machine. Generally, artificial intelligence research overlooked the reason, why humans and animals are intelligent. Maybe it is because artificial intelligence is in the computer sciences faculty, while the study of evolution is in the biology faculty, but regardless, artificial intelligence research failed to achieve real progress. Still, having understood the basic design of our future machines (meaning, artificial neural networks), we cannot overlook this aspect.
The basic function our brain performs, is solving the emergence of pain sensations, by manipulating our animal body, such as through eating, drinking, and the likes. This basic function motivates us to learn all other cognitive capabilities, including our social and conceptual capabilities, as well as our desire to learn a language, with which we can exploit our "intelligent environment", so to provide us with our biological necessities. Therefore, if we are to build artificially conscious machines, we must implement this principle to the full. We must connect the outputs of our future artificial neural network to a “body”, which our future artificial neural network will learn to control. Still, it does not have to be a physical body. We could let our machines operate a simulated model of a body, which traditional computers could produce, as there is no real significance to the materialistic attributes of this body. Alternatively, if we choose to distance these machines from human cognitive capabilities, we can abstract this body, so to perform abstract computational actions, such as accessing databases, or saving data. Nevertheless, because we learn most of our cognitive capabilities from our interactions with other humans, again, it would be preferable to utilize an artificial body, which could exist in an environment other humans could visit, and "raise" our future machines, quite similarly to the manner we raise human infants.
In addition, we should remember, that because our future machines rely on contingent dimensional technologies, it would be extremely hard for us to predetermine, which designs would yield "better" behaviors, and possibly, the only way we would be able to find good designs, would be through trial and error, suggesting that even though these machines would be manmade, to find "good" designs, we must incorporate some sort of “controlled evolution”. Still, finding the right design does not have to be that difficult. To clarify, considering nature managed to create human consciousnesses with random trial and error, we should be able to "manage it" as well. Indeed, we can always claim, the only reason we evolved as we did, is due to the omniscience and omnipotence of the real gods, somewhat similarly to arguments, which creationists suggest. Nevertheless, this is but one option. Moreover, supposing the real gods "want" our effort to succeed, they might even "help us", by causing us to unintentionally construct "good" designs. Moreover, as we previously suggested, it is possible that challenges, which currently, seem so "out there" would become tangible, once we put our minds to the effort. Furthermore, we could harness discoveries in the field of brain research to “speed up” the evolution of our artificial neural networks, as well as perhaps utilize Einstein's theories of time relativity, so to "accelerate" the progression of causality in our future machines, reducing the time it would take them to evolve otherwise.
So, is this it? Is this how we are to “change the odds” in our favor, in the event of a global angelic collapse? Is this the new tendency of the real gods? Are we to build artificially conscious machines, capable of performing hyper-computations? Will our future machines demon cast? Will we affirm the metaphysical foundation, which we suggested in “Delta Theory”? Are we going to affirm the dragonish option empirically? Is this the reason this text came to be?

Was there an angelic message, hidden in this text?

