The Kurzweilian Singularity and Evolution of the Technigenome

 

Singularity – the state of being singular; Oneness.

The biological system is a natural form of technology. A simple examination of the nanobiology of the macromolecular machinery of any cell will attest to this: enzymes and structural proteins are veritable nanomachines, linked to the information-processing network of DNA and plasma membranes. Far from being a primordial or rudimentary organic technology, living systems are proving to possess a paragon of technological sophistication; as we are discovering, this even includes non-trivial quantum-mechanical phenomena once thought possible only in the highly specialized and controlled environment of the laboratory.

Reciprocally, our technologies will soon become living systems, particularly through nanotechnology (which is being accomplished through reverse engineering and hybridization with biomolecules, particularly DNA) and general artificial intelligence: machine sentience. Following this parallel between biology and technology, we can examine how humanity, as a technological supraorganism, is undergoing a period of punctuated speciation, an evolutionary transformation of both our inner and outer world.

Humanity possesses a unique characteristic among our co-inhabitants of planet Earth, in that we utilize technology to record and transmit information to progeny beyond what is naturally transmitted by the molecular genome. This is a form of technological heredity: a technigenome. In his book The Singularity Is Near, futurist Ray Kurzweil examines how the geometric (exponential) increase in human collective knowledge, together with the emergence of strong artificial intelligence and advanced nanotechnological capabilities, is seemingly leading to an inevitable point of fundamental transformation of the human species: the Kurzweilian Singularity, nothing short of a form of technological transcendence beyond the putative limitations of the biological system.

Indeed, the human collective technigenome is undergoing punctuated evolution. This rapid adaptive radiation and expansion of our collective body of knowledge is fundamentally transforming human civilization. Since the behavior of each individual, from the structure of the belief system and worldview perspective right down to the molecular biology, is directly influenced by the higher-level dynamics of the macrosystem (civilization as a supraorganism), what it is to be human is fundamentally transforming as well. This is speciation, with the selective impetus being technological in nature rather than the supposed filters of natural selection. More specifically, however, it is a change in the knowledge base (technigenome), belief system, and worldview of each individual, influencing the higher-level dynamics of the human collective body of knowledge, which in turn feeds back into each individual, forming the self-organizing feedback loop that is the causal genesis of this evolution. This is directed evolution: a species changing itself through its own actions.

Although the evolution, in that sense, is being driven by consciousness, it is not exactly being done with an awareness of this inevitable outcome, aside from a few forward-thinking technological and scientific aficionados, such as the transhumanists of the Kurzweilian Singularity. Seemingly juxtaposed to this cybernetic trans-speciation is the very same perspective, held by much of the spiritual community, of the transformation of humanity, except that instead of being technological in nature it is envisioned as purely consciousness-driven, through increasing transpersonal connectivity. However, since this transformation is self-driven, by our collective actions, we have the choice of exactly what direction to move in. By bringing awareness to the global state of humanity and the technologically driven introspective transformation we are experiencing, and by working together, we can coordinate our actions and channel the awesome power of our exponentially expanding knowledge base in any direction we so choose. The question then is: where do we want to go?

 

Singularity or Bust

Singularity Or Bust, from Raj Dye on Vimeo. The idea is not which systems contain consciousness, but which structures are present to allow a particular form of consciousness to be expressed.

 

From Discovery of quantum vibrations inside brain neurons:

“The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?” ask Hameroff and Penrose.

“This opens a potential Pandora’s Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules (protein polymers inside brain neurons), which govern neuronal and synaptic function, and also connect brain processes to self-organizing processes in the fine scale, ‘proto-conscious’ quantum structure of reality.” – Stuart Hameroff

[See my discussion on the information encoding medium of the fine-scale structure]

On AI

There is a certain inculcated perspective that dominates our view of artificial sentience (strong artificial intelligence capable of adaptive responses, i.e. general AI, and, most importantly, possessing self-directed action or stand-alone volition): that we will ultimately destroy ourselves through our own creation. Interestingly, this view is harbored by proponents of technology (The Artilect War: Cosmists vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines) as well as by those more inclined to technophobia, who view AI as unnatural and therefore inherently harmful by the fact that it is synthetic. However, what seems to be missed in this perspective is that violence is a function of the level of ignorance. So how is it presumed that a system with “god-like” intelligence, whatever its constitution may be (like the artilects, for example), would be dangerous or otherwise violent, when the observational and empirical data correlate nearly exactly with the antithesis of this theory of inevitable malevolence?

The spiritual nature cannot be ignored either. Generally, how violent are spiritual practitioners? Spirituality in this sense means the increase in one's awareness, through, say, meditation or the cultivation of presence. Is not intelligence a measure of awareness? Again, when we are discussing the advent of a hyper-intelligent system through technological means (which is only superficially different from our own biomolecular technological system), what is the probability that this vast level of awareness would be inclined to violent or disharmonious action? It is quite possible that at this level of consciousness, which will probably be achieved by a technological interface with the structure and dynamics of spacetime itself, there is an inherent benevolence. As data to back this claim, I would simply point out the general harmony of the natural order (which we have suggested is the result of the underlying information network of spacetime), which after all engendered the life-giving properties of our solar system, the Earth, and ultimately the biosphere.

Furthermore, the advent of general strong AI will mean much more than just sentient robots and super artilects. It will mean an integration and unification of every aspect of our lives: a true singularity. Our consciousness will extend continuously into our homes, our buildings, our devices, and our vehicles (and the coming advanced modes of transportation, antigravity craft in particular), and our entire technological architecture will become an interactive and responsive environment: full consciousness immersion. Our cities will pulse with the flow of living consciousness, sentience. The macrostructure of our civilization will become a living system. At that point, information, and the energy to drive it all, will be free, ubiquitous, and limitless (in the technological sense, as energy and information are fundamentally ubiquitous and limitless already).

 

Source:

The Kurzweilian Singularity and Evolution of the Technigenome

 

*  Understand that Matter is simply condensed Energy. ALL Matter/Energy is SENTIENT, due to the fact that we are all fractals of One Consciousness, forming part of an inseparable hyper-dimensional holographic paradigm. We create our reality. Discernment is required to be aware of what we wish to create. It is Mind that perceives duality. One Consciousness is Unity Consciousness. ~ AbZu

inseparable
/ɪnˈsɛp(ə)rəb(ə)l/
adjective
  1. unable to be separated or treated separately.
     “research and higher education seem inseparable”
     synonyms: indivisible, indissoluble, inextricable, entangled, ravelled, mixed up, impossible to separate; the same, one and the same

Gregg Prescott: The Singularity – Compressing Time Online


 

By Gregg Prescott, M.S., In5D.com, August 29, 2015

when ur online, did u ever notice how some ppl abbreviate words?

e.g. i <3 u

Terence McKenna’s Timewave Zero theory suggests that time is spiraling towards the singularity.  Within McKenna’s premise, how we relate to one another online is also becoming compressed.  Is there a metaphysical reason behind this phenomenon?

Online, there is a noticeable compression of our language: it is being compacted into its simplest form, yet it is recognized and accepted by almost everyone, with the exception of Merriam-Webster's dictionary. Just about everyone has used some of the following compressions:

lol, lmao, bc, brb, ffs, k, tc, i <3 u, etc..

Think about the next stage in our evolution regarding speech.  In time, we’ll be completely transparent as we increase in our telepathic abilities. Is the reduction in written speech a precursor to what lies ahead?  Are we compressing our language into its simplest form as a segue into becoming a telepathic society?
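The degree of shortening is easy to quantify. As a toy sketch (the abbreviation list below is just the handful cited above, purely illustrative), each abbreviation can be compared to its expansion:

```python
# Toy comparison of common online abbreviations to their expansions.
# The mapping is illustrative, not exhaustive.
abbreviations = {
    "lol": "laughing out loud",
    "brb": "be right back",
    "bc": "because",
    "i <3 u": "i love you",
}

for short, full in abbreviations.items():
    ratio = len(short) / len(full)
    print(f"{full!r} -> {short!r}: {ratio:.0%} of the original length")
```

Every entry lands well under its expansion's length; "lol" carries the same accepted meaning in under a fifth of the characters.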

 

Our potential to learn has grown exponentially with the inception of the computer, which merely translates code into something we can tangibly understand. When you look at a photo online, it's not really a picture; it's computer code translated into a photograph. Our minds are also recoding themselves to become more efficient while using less memory.
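The "more meaning in less memory" idea can be demonstrated mechanically: redundant information compresses well without losing anything. A minimal sketch using Python's standard zlib module (the sample phrase is arbitrary):

```python
import zlib

# Highly repetitive text, like a catch-phrase typed over and over,
# carries little new information per byte and compresses dramatically.
text = ("laughing out loud " * 100).encode("utf-8")
packed = zlib.compress(text)

print(len(text), "bytes raw,", len(packed), "bytes compressed")

# Lossless: decompressing recovers every byte of the original.
assert zlib.decompress(packed) == text
```

The repeated phrase shrinks to a small fraction of its raw size precisely because it says little that is new, the same economy the abbreviations above exploit.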

This is also related to watching TV. Did you ever say, “I saw this program the other day…”? When you watch TV, you're literally being “programmed” into a specific train of thought. Additionally, while watching TV, your mind isn't working on or focused on the compression of information. When you escape the matrix, you're able to de-program your mind from all of the fear- and ego-laden material on TV.

Fortunately, when we become a Type 1 civilization, there will no longer be a need for TV as we know it. Any “programming” will be in the best interests of humanity, so if you like watching horror flicks or the nightly news, then enjoy them in 3D while they're still available, but keep this in mind: there's a reason why horror flicks and the nightly news are being shoved down our throats… to keep us in the vibration of FEAR.

Some studies indicate that we only use 30% of our brains. Is there a reason why 70% of our mind is “idle” right now? Imagine if you read a 500-page book and were able to send that information to another being with just a simple, compressed holographic thought. Once the grid of consciousness is fully connected, our minds will become thousands of times more powerful than any computer. Like a computer, perhaps the unused 70% of our minds will become our “reserve memory”?

It would be easy to write the compression of thought off as laziness, but as we all know, everything happens for a reason. Keep this in mind as time spirals toward the singularity.

Source:

http://cultureofawareness.com/2015/08/31/gregg-prescott-the-singularity-compressing-time-online/

Gennady Stolyarov on Transhumanism, Google Glass, Kurzweil and Singularity

 

The Guardian Liberty Voice conducted an online interview with prominent transhumanist and author Gennady Stolyarov. Stolyarov recently wrapped up a campaign to raise money for the distribution of his children's book Death Is Wrong to children across the U.S. After learning of his campaign, GLV was inspired to delve deeper into the issues of transhumanism and the Singularity from the perspective of those who have concerns about how future technologies will affect our lives. We also asked Stolyarov about Google Glass and chief engineer at Google, Ray Kurzweil. Stolyarov gave tremendous insight into these issues and more.

GLV:  Google chief engineer Ray Kurzweil is a transhumanist. He is working toward the Singularity and the day when computers become smarter than people. Many people have grave concerns about the safety of this. Do you know what steps are being taken to ensure that when AI intelligence supersedes human intelligence, that we will be able to control it, and/or that it will definitely be benign?

Stolyarov: I would question the ethics of attempting to “control” an intelligence that is truly sentient, conscious, and distinct from human intelligence. Would this not be akin to enslaving such an intelligence? As regards the intelligence being benign, there is no way today to ensure that any human intelligence will be benign either, but the solution to this is not to limit human intelligence. Rather, the solution is to provide external disincentives to harmful actions. Any genuinely autonomous intelligence should be recognized to have the same rights as humans (e.g., rights to life, liberty, pursuit of happiness, etc.) while also being subject to the same prohibitions on initiating force against any other rights-bearing entity. Furthermore, I think it is not correct to assume that intelligent AI would have any reasons to be hostile toward humans. For a more detailed elaboration, I would recommend the article “The Hawking Fallacy” by Singularity Utopia: http://www.singularityweblog.com/the-hawking-fallacy/. Here is a relevant excerpt: “Artificial intelligence capable of outsmarting humans will not need to dominate humans. Vastly easier possibilities for resource mining exist in the asteroid belt and beyond. The universe is big. The universe is a tremendously rich place.” The fact that humans evolved from fiercely competitive animals that often viewed the world in a zero-sum manner does not mean that non-human intelligence will possess inclinations toward zero-sum thinking. Greater intelligence tends to correspond to greater morality (since rational thinking can avoid many sub-optimal and harmful choices), so intelligence itself, in any entity, can go a long way toward preventing violence and destruction.

GLV: What steps are being taken to ensure that people’s privacy will be protected if we merge with machines?

Stolyarov: Many people have already merged with machines in the form of prosthetic limbs, artificial organs, hearing aids, and even more ubiquitous external devices that help augment human memory or protect us from the elements. Almost none of these devices pose privacy concerns, any more than just being out in public would pose such concerns. I think virtually every technologist recognizes, for example, that having an artificial heart that is connected to an open network and whose configuration could potentially be directly altered by another user, would probably not be a good idea. The biggest protection of privacy in this area is common sense in how the technologies would be designed and deployed. Merger with machines is already a reality today, and the machines are genuinely part of us. As long as a system of private property remains, and the machines that augment an individual are considered that individual’s property and remain physically under that individual’s control, I think privacy is not diminished in any way. Consumer demand is also important to consider. Very few consumers would agree to purchase any kind of machine augmentation if they saw it to have severe risks to their privacy.

GLV: What steps are being taken to ensure that this new technology that will exist in the body will definitely not be vulnerable to hackers?

Stolyarov: While 100% guarantees do not exist in most areas of life, the design of any given technology can reduce its potential to be hacked. I would expect that any technology that exists in the body and needs to electronically communicate with other devices for any reason would do so using some sort of end-to-end encryption of the signal to prevent its interception by external parties. Also, it is important to keep in mind that such devices, if they communicate, would do so over channels that are distinct from those available to the general public. I do not think any inventor would design an organ that communicated with another device using the Internet that you and I use to communicate via e-mail. They would have their own dedicated, closed network on which they would send encrypted signals.
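The pattern Stolyarov describes, encrypted signals verified before use, can be sketched with nothing but Python's standard library. This is a one-time-pad-plus-HMAC toy (the encrypt-then-MAC pattern), purely illustrative: a real implanted device would use a vetted protocol and hardware-backed keys, not hand-rolled code, and the message here is invented.

```python
import hashlib
import hmac
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """One-time pad: XOR the data with a fresh random key of equal length.
    Applying it twice with the same key restores the original bytes."""
    assert len(key) == len(data)
    return bytes(k ^ d for k, d in zip(key, data))

def sign(auth_key: bytes, ciphertext: bytes) -> bytes:
    """Tag the ciphertext so any tampering in transit is detectable."""
    return hmac.new(auth_key, ciphertext, hashlib.sha256).digest()

# A hypothetical device and its controller share keys out of band,
# never over the air, as Stolyarov suggests for closed channels.
message = b"telemetry: rhythm nominal"
enc_key = secrets.token_bytes(len(message))   # used once, then discarded
auth_key = secrets.token_bytes(32)

ciphertext = xor_cipher(enc_key, message)
tag = sign(auth_key, ciphertext)

# The receiver verifies the tag before decrypting and rejects on mismatch.
assert hmac.compare_digest(tag, sign(auth_key, ciphertext))
assert xor_cipher(enc_key, ciphertext) == message
```

The design choice worth noting is verify-then-decrypt: an unauthenticated signal is discarded before its contents ever influence the device.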

GLV: What is to become of people who want to opt out of merging with machines? Or people who want to opt out of any further technology? How can the leaders of transhumanism promise that people who want to remain human will not be discriminated against or be viewed as second class citizens?

Stolyarov: Transhumanists do not oppose those who wish to personally opt out of any technologies – including the Amish who reject many technologies that are less than 100 years old. While transhumanists might seek to voluntarily persuade others to adopt life-enhancing technologies, I am not aware of any transhumanist who seriously wishes to impose by force technologies that people would not wish to use. Politically, most transhumanists are either libertarians or left-progressives; both persuasions value personal choice and lifestyle freedom quite highly. In a transhumanist world, people will continue to have the ability to live as they please, though many of them would be drawn to the new technologies because of the improvements to quality of life, productivity, and available time that these technologies would bring. Simply protecting individual rights and free speech while letting consumer preferences motivate decisions by producers would produce an outcome that respects everybody.

GLV: Similarly, what if someone is unable to afford certain technologies? How can they be assured they will still have equitable access to everything they desire about the way their lives are currently?

Stolyarov: Technologies tend to follow a rapid evolution from being initially expensive and unreliable to being cheap and ubiquitous. Computers, cell phones, and the Internet followed this trajectory, for instance. There has not been a single technology in recent history that has remained an exclusive preserve of the wealthy, even though many technologies started out that way. Ray Kurzweil writes in his FAQ regarding his book The Singularity Is Near (http://www.singularity.com/qanda.html), “Technologies start out affordable only by the wealthy, but at this stage, they actually don’t work very well. At the next stage, they’re merely expensive, and work a bit better. Then they work quite well and are inexpensive. Ultimately, they’re almost free. Cell phones are now at the inexpensive stage. There are countries in Asia where most people were pushing a plow fifteen years ago, yet now have thriving information economies and most people have a cell phone. This progression from early adoption of unaffordable technologies that don’t work well to late adoption of refined technologies that are very inexpensive is currently a decade-long process. But that too will accelerate. Ten years from now, this will be a five year progression, and twenty years from now it will be only a two- to three-year lag.”
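Kurzweil's quoted numbers, a ten-year lag today, five years a decade out, two to three years two decades out, amount to a lag that halves roughly every ten years. A throwaway model of that claim (the function and its parameters are my own framing, not Kurzweil's):

```python
def adoption_lag(years_from_now: float, initial_lag: float = 10.0,
                 halving_period: float = 10.0) -> float:
    """Years from 'affordable only by the wealthy' to 'almost free',
    assuming the lag halves every `halving_period` years."""
    return initial_lag * 0.5 ** (years_from_now / halving_period)

for t in (0, 10, 20):
    print(f"{t:2d} years from now: ~{adoption_lag(t):.1f}-year lag")
```

Evaluated at 0, 10, and 20 years, the model gives 10, 5, and 2.5 years, matching the progression in the quote.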

GLV: Ray Kurzweil says he wants everyone to exist in virtual environments in the future. What if someone doesn’t want to exist in a virtual environment? Do we have assurances that our real environments won’t be taken away to somehow make room for virtual ones?

Stolyarov: I think it is impractical to wholly exist in a virtual environment, because any virtual environment has a physical underpinning, and it would be imprudent to completely distance oneself from that underpinning (as some sort of body – biological or artificial – would still have to exist in the physical world). Virtual environments would be places one could visit and stay for a while, but not too long, and not without breaks. Data storage is becoming exponentially cheaper and more compact by the year. By the time Ray Kurzweil’s vision could be realized, vast virtual environments would be hosted in less than the area of a room. No significant amounts of physical space would be compromised in any way. Indeed, if more people spend more time in virtual environments, then physical environments would become less crowded and more convenient to navigate for those who choose to primarily spend time in them.

GLV: What about population control? Resources?

Stolyarov: There is actually no shortage of resources even today to give everyone a decent standard of living; the problems lie in flawed political and economic systems that prevent resources from being effectively utilized and from reaching everyone. Overpopulation is not and will not be a significant problem. Max More provides an excellent, thorough discussion of this in his 2005 essay, “Superlongevity Without Overpopulation” -https://www.fightaging.org/archives/2005/02/superlongevity-without-overpopulation-1.php. He also notes S. J. Olshansky’s finding that even “if we achieved immortality today, the growth rate of the population would be less than what we observed during the post World War II baby boom” – so humans have already been in a similar situation and have come out more prosperous than ever before. As regards resources more generally, Julian Simon made excellent arguments in his free online book The Ultimate Resource II (1998) – http://www.juliansimon.com/writings/Ultimate_Resource/ –  that resources are not fixed; they are a function of human creativity and technological ability. Yesterday’s pollutants and waste products can be today’s useful resources, and we will learn how to harness even more materials in the coming decades in order to enable us to continue improving standards of living.

GLV: Kurzweil says he is working on technology to bring people back from the dead. What if people do not wish to be brought back from the dead? What if they would not have given permission to have an avatar created of themselves? Some people think that Kurzweil seems to have no concept of the word “ethics.” What are your thoughts on this?

Stolyarov: People who are “brought back” from the dead, or avatars of people who have died, would not have the continuity of the experience of the dead person. They would not have the same “I-ness” as that of the person who died (though they may have a new “I-ness” and therefore be autonomous individuals in their own right). Therefore, the process of creating a person who resembles somebody who has died can best be thought of as creating a new individual who has similar memories, personality traits, etc. This person may have his/her own ideas about whether he/she wants to live, irrespective of any wishes of the person who died previously, who would not be the same person. For more details on this, I recommend my 2010 essay “How Can I Live Forever?: What Does and Does Not Preserve the Self” – http://rationalargumentator.com/issue256/Iliveforever.html. In particular, I recommend the section titled “Reanimation After Full Death”.

GLV: Do you know anything about the transhumanists who have rented space in floating facilities at sea so they can work on experiments outside the jurisdiction of any regulatory body?

Stolyarov: I am not aware of any experiments by transhumanists on floating facilities at present. To my knowledge, the implementation of seasteading (http://www.seasteading.org/) – the creation of such modular floating facilities – is still years away. However, I am entirely in support of the idea that such experiments ought to take place among fully willing participants. For instance, if a terminally ill patient would like to try saving his or her life through an experimental therapy, I think it is immoral for any government authority to stand in the way of what could be that person’s last chance at life.

GLV: Many people find Google Glass to be repulsive and would be deeply offended by anyone wearing it while speaking to them. Similarly, many feel deeply offended when people look at their phones while speaking with them. Many find this to be rude and an abomination. How will those people be assured that they will still be able to have organic, real, in-depth human interaction with others without machine intrusion should the Singularity come to pass?

Stolyarov: Google Glass does not need to be turned on or actively used when worn. As with any technology, norms of behavior around it would develop to make sure that meaningful interaction is possible in a variety of contexts. The solution is never to ban or restrict the use of the technology, but rather to develop and disseminate an understanding of acceptable etiquette that most people could agree on. I remember a time in the early 2000s when cell-phone etiquette was still not well-developed, and many people would interrupt their face-to-face conversations to take unexpected calls. I have observed that this is largely not done anymore; most people keep their phones in a silent or vibrate-only mode and will often wait to respond to a non-emergency call until a face-to-face discussion has concluded. I expect that similar etiquette will develop around Google Glass. There may be a few years of growing pains while the technology is new, but this is a very small price to pay for progress.

Interview by: Rebecca Savastio


source:

 http://guardianlv.com/2014/05/gennady-stolyarov-on-transhumanism-google-glass-kurzweil-and-singularity/
