This page provides a complete index of every word that is currently defined in The Astrology Dictionary.
This page provides a complete index of every word that is currently defined in The Astrology Dictionary.
Christianity, Jesus Christ and Virgin Mary are very highly revered in Islam , the divine provenance of Christianity, the prophetic mission of Jesus and his virgin birth from Mary, all are enshrined in both the Holy Qoran ,the Hadith and Moslem tradition!
More so than in many ” Christian”Churches!
The Vatican speaks for the largest Christian community, it’s message of peace and justice is presumed to be universal, it’s compassion boundless and indiscriminate .
It avowed several times that it has no quarrel,with Islam …..more as a public relations ploy as unfortunately it turned out to be as in:
– the resigned? pope Heidelberg speech which was never amply clarified as made by the Pope in his official capacity thus speaking for the Vatican and Catholic Christianity or ,in the hitherto novel for a Pope in office , for the Pope personally !
-the present Pope seems to avidly seek a universal standing ,by standing , presumably , for ALL the aggrieved and less endowed of all colors and faiths Except for the Palestinians that he never mentions!
I fail to see a clear stand by the Protestant church( s ) except for the major mainline churches whose opposition to the conquest of Iraq Was duly noted and hugely appreciated.
Beside their seminal love of Judaism that we are aware of …where do they stand re the Palestinian cause: state, people and refugees???
The answer is important in that the cumulative Christian stand will,determine many things of a universal import!
Peter, let me unpack this for you.
IMHO, many of these co-called believers are not credible representatives of Jesus, and never were.
They’re believers in an illusory form of Christianity associated with Salvation Theology. Think of it as magical thinking Christianity. So long as these adherents profess to “love” and “believe in” Jesus, they’re taught that they’re saved. (I know that Paul said they would be, but he also thought the end of the world was imminent, and I tend not to treat that type of thinker as a credible authority.) Thus, it doesn’t matter to these “Christians” if they follow a devil like Trump, who admits that he has never asked God for forgiveness – and, seriously, how many opportunities has life created Trump to ask God, or his fellow man, or the people he swindled at his bogus University, or his ex-wives for forgiveness?
Moreover, as we approach the 500th anniversary of The Reformation, let me suggest that it might be time for self-identified Christians to re-evaluate what it even means to be a Christian. Does it mean adhering to magical thinking – “I said the magic words, and therefore I’m saved and you’re not” (what kind of defensible deity would reward that) – or would it require making Jesus’ actual philosophy of love, forgiveness, and non-violence the central precepts in one’s approach to life?
As a heretic who prefers to unpack the living principles at the heart of every religious tradition, let me strongly suggest the latter.
Karl Marx, Parijse Manuscripten (1844)
‘Zij heeft de persoonlijke waardigheid in de ruilwaarde opgelost, en in de plaats der talloze verleende en verworven vrijheden als enige vrijheid de gewetenloze handelsvrijheid gesteld’ (Marx en Engels in het Communistisch Manifest uit 1848 ).
Weinig denkers hebben zo’n grote maatschappelijke en politieke invloed gehad als Marx. Hij was de hoop van de arbeidersklasse en de schrik van ondernemers. Marx beweerde dat de mens in een kapitalistische samenleving vervreemdt. Arbeiders leggen hun kracht en vermogen buiten zichzelf (in producten) in plaats van in zichzelf, net als religie dit doet. Zijn vroege denken wordt vaak als humanistisch aangemerkt.
Shell foundation doet goed werk en pakt zaken juist aan mbt globale ontwikkeling
Zonder Koningshuis had het een waar zooitje geworden….
doch er moet nu verandering komen, of een gesprek met de Koning om constructief bij te praten alles wat niet bekend is bij hem en de politieke structuur,
Toen mevrouw haar afgelegde eed benadrukte die artsen dienen af te leggen en zich volledig aan moeten conformeren, toen knapte er iets bij mij….. Jakob (Zie onderaan)
‘Cancer Screening Has Never Saved Lives’ BMJ
Tuesday, January 12th 2016 at 3:45 am
Sayer Ji, Founder
Millions have marched for “cancer causes.” Millions more have
been diagnosed “early” and now believe screening saved their
lives. But a new study confirms something we have been reporting
on since our inception: In most cases, screening not only has not
“saved lives,” but actually increases your risk of dying.
An extremely important new study published in the British Medical Journal titled,
” Why cancer screening has never been shown to “save lives”—and what
we can do about it , ” confirms something we have been reporting upon at
GreenMedInfo.com since our inception, namely, cancer screening has not lived
up to its long held promise of “saving lives” because diseasespecific
in mortality do not equate to reductions in overall mortality. Worse, in some
cases overall mortality actually increased because of screening.
In the new study, Vinay Prasad and colleagues, argue that the real benchmark
for the success of any cancer screening program is if the “early stage” cancers
being diagnosed and treated actually result in a reduction in the overall
For instance, we have reported extensively on the widespread
misclassification of ductal carcinoma in situ (DCIS) as a bono fide
malignant cancer, as well as its epidemic level overdiagnosis and
overtreatment . Tens of fhousands of women are diagnosed each year with
“early stage breast cancers,” even though the National Cancer
Institute itself acknowledges it should be classified as a benign or indolent
lesions of epithelial origin. The New England Journal of Medicine published a
study in 2012 shows that approximately 1.3 million women were diagnosed
with DCIS in the past 30 years, with most receiving either mastectomy,
lumpectomy, radiation, chemotherapy, or some combination thereof. Ironically,
many of these women ardently believe that their lives were “saved” by the
screening and treatment, succumbing to the biomedical equivalent of Stockholm
syndrome where identifying with the ‘aggressor’ becomes palliative. In reality,
most suffered irreparable harm not from the “cancer,” but from both the
psychological and physical effects of being wrongly diagnosed and treated. If
the end point were not breast cancer specific mortality (‘invasive’ breast cancer
has not declined but increased with screening, indicating overdiagnosis ),
but overall mortality, it is likely that these DCIS diagnosed women’s lives were
significantly truncated because of screening programs; at the very least, the
quality of their lives would have been significantly negatively impacted.
Much of the damage, pain, and suffering associated with overmedicalization
could have been avoided if public health advocates and private industry
promoters of screening programs had realized that reducing the risk of cancer in
one bodily location the
breast, the colon, the lung, the thyroid does
necessarily translate into a reduction in mortality risk everywhere else. It is this
ignorance which drives the many
campaigns, like the heavily
pinkwashed “Breast Cancer Awareness” campaign, which increasingly the public
is acknowledging to be a highly unethical moneymaking
The article summarizes the problem associated with confusing disease specific
with overall mortality reduction, succinctly:
Despite growing appreciation of the harms of cancer screening, 1 2 3
advocates still claim that it “saves lives.” 4 This assertion rests, however,
on reductions in disease specific mortality rather than overall mortality.
Using disease specific mortality as a proxy for overall mortality deprives
people of information about their chief concern: reducing their risk of
dying. 5 6 Although some people may have personal reasons for wanting
to avoid a specific diagnosis, the burden falls on providers to provide
clear information about both disease specific and overall mortality and
to ensure that the overall goal of healthcare—to improve quantity and
quality of life—is not undermined. 7
In this article we argue that overall mortality should be the benchmark
against which screening is judged and discuss how to improve the
evidence upon which screening rests.”
And so, without the proper benchmark or end point, all the educational and
efforts going towards “reducing deaths” or “saving lives” from
breast, prostate, lung, skin, brain, [insert body part], become misleading, if not
overtly propagandist in nature.
Indeed, the extant scientific evidence itself reveals that at best the present
disease specific agenda for “cancer prevention” is pseudoscientific.
section of the study subtitled, “Why cancer screening might not reduce overall
mortality,” the authors summarize what the literature reveals on the topic:
Discrepancies between disease specific and overall mortality were found
in direction or magnitude in seven of 12 randomised trials of cancer
screening.8 Despite reductions in disease specific mortality in the
majority of studies, overall mortality was unchanged or increased. In
cases where both mortality rates were reduced the improvement was
larger in overall mortality than in disease specific mortality. This
suggests an imbalance in nondisease
specific deaths, which warrants
examination and explanation. A systematic review of metaanalyses
cancer screening trials found that three of 10 (33%) showed reductions
in disease specific mortality and that none showed reductions in overall
The implications of this are profound.
As we reported previously with Anjelina Jolie’s decision to have her breasts
and ovaries prophylactically removed , ostensibly to “reduce her risk of
dying,” removing healthy body parts to prevent diseasespecific
unlikely to reduce the overall risk of dying. And yet, the “Jolie effect” is a well
established phenomena. Her decision was lauded the world over as courageous
and an “evidencebased”
precautionary step, with tens of thousands of women
(and some men) following suit. We hope the new BMJ study raises a flag of true
caution for those who may habitually and uncritically follow the celebritycentric
The significant harms of screening overdiagnosis and overtreatment extend to
men as well. For instance, aggressive prostate screening programs over the past
few decades have resulted in the removal and/or irradiation of millions of men’s
prostates. A 2004 study found that an astounding 200,000 men are being
diagnosed annually with prostate cancer. 1 Tragically, the 2013 National Cancer
Institute report referenced above also found that socalled
prostate cancer,” high grade intraepithelial prostatic neoplasia (HGPIN), is also
essentially a benign lesion within prostatic epithelial tissue, not unlike DCIS in
women’s breasts. In other words, millions of men were diagnosed with a
potentially lethal “precancer”
or “early stage cancer” they never had.
As an aside, it should be noted that even in the case of lesions of true concern
for malignancy, there is always hope. Cancer is not an inexorably lethal, genetic
process that happens in an environmental, nutritional, and
emotional vacuum. Instead of viewing it as the biological
equivalent of a terrorist, and cutting, burning, and poisoning the target tissue
(and, collaterally, the entire body of the host), we need to abandon the warfare
model of allopathic medicine and adopt one that focuses on targeting cancer
stem cells in nontoxic
ways , looking at carcinogenesis through the lens of
the informational dysregulation of genetic and epigenetic pathways in the cell;
informational “disease” in contradistinction to physiochemicallybased
of course, more prone to being reversed. Cancer, in this view, can be halted in
its tracks, and even regressed, assuming that, along with informational
corrections (e.g. “nanopharmacological” approaches like homeopathy, “energy
healing,” high quality food ( which is also informationcontaining
tumor microenvironment can be adjusted back to healthier conditions through
detoxification, lifestyle modifications, mindbody
interventions, and targeted,
“high dose” nutritional support.
‘Cancer Screening Has Never Saved Lives’ BMJ
Tuesday, January 12th 2016 at 3:45 am
Sayer Ji, Founder
The new study explained how prostate screening programs have created “off
target” deaths, primarily through the high rate of false positives, overdiagnosis
cancers (e.g. HGPIN), and detection of incidental findings (i.e.
unintentionally discovered conditions):
For example, prostate specific antigen (PSA) testing yields numerous false
positive results, which contribute to over one million prostate biopsies a
year. 12 Prostate biopsies are associated with serious harms, including
admission to hospital and death. 12 13 Moreover, men diagnosed with
prostate cancer are more likely to have a heart attack or commit suicide in
the year after diagnosis or to die of complications of treatment for cancers
that may never have caused symptoms. 12 13
prostate screening has been found to have a false
positive rate of about 75%. 2 Obviously, given this finding, there is nothing
specific at all about the prostate “specific” antigen test, which is why the United
States Preventive Services Task Force now strongly recommends against it .
How the Public Is Misled Into Believing “Screening
As we have explored in previous writings, such as ” The Dark Side of Breast
Cancer Awareness Month ” and ” A DIRE WARNING: The Cancer Industry
Owns The Media And Your Mind ,” the public is intentionally misled into
believing a priori cancer screening saves lives even when no real, independent
scientific evidence exists to support it.
The new study reveals just how truly inflated the public’s expectations have
A systematic review has shown that the public has an inflated sense of the
benefits and discounted sense of the harms of mammography screening,
the cervical smear test, and PSA screening. In one study 68% of women
thought that mammography would lower their risk of getting breast cancer,
62% thought that screening at least halved the rate of breast cancer, and
75% thought that 10 years of screening would prevent 10 breast cancer
deaths per 1000 women. Even the most optimistic estimates of screening
do not approach these numbers. The most recent Cochrane review of
randomised controlled trials of PSA screening failed to show a reduction in
disease specific death. The Cochrane review of mammography did not
show reduced breast cancer deaths when adequately randomised trials
Advocates of screening have emphasised its benefits, sometimes
verging on fear mongering. Others, including us, think that shared
decision making should be the focus. But as long as we are unsure of
the mortality benefits of screening we cannot provide people with the
information they need to make an informed choice. We must be honest
about this uncertainty.
A summary of the Swiss medical board’s decision not to recommend
mammography shows that for every 1000 women who undergo
screening one breast cancer death is averted (from five to four), while
cancer deaths either remain at 39 or may increase to 40. If
cancer deaths remain the same, a woman must weigh net
benefit against harms. If screening increases nonbreast
to 40, women would simply be trading one type of death for another, at
the cost of serious morbidity, anxiety, and expense. Women should be
told that to date, with over 600 000 women studied, there is no clear
evidence of a reduction in overall mortality with mammography
The public’s uncritical trust in screening programs help keep hidden the
significant harm they produce; harms that are further obfuscated by
research. The study cites the fact that, ” of 57 studies
[reviewed] only 7% quantified overdiagnosis and just 4% reported the rate of false
positive results. ” They also found that, ” When researchers do examine the harms
of screening the results are typically sobering”:
False positive results on breast cancer screening have been associated
with psychosocial distress as great as a breast cancer diagnosis 6
months after the event . False positive results affect over 60% of women
undergoing screening mammography for a decade or more, and 1213%
all men who have undergone three or four screening rounds with PSA. In
the NLST [National Lung Screening Trial] 39.1% of people had at least one
positive test result, of which 96.4% were false positives.
Overdiagnosis affected 18% of people diagnosed with lung cancer on low
dose CT in the NLST, and researchers have found that as many as one in
three diagnoses of invasive breast cancer (or one in two for invasive cancer
and carcinoma in situ) by mammography constitute overdiagnosis. These
numbers are broadly equivalent to those found with most major screening
There are also wellknown,
though rarely acknowledged, harms associated with
the screening technologies themselves. For instance, xray
a particular type of gamma radiation that has been found to have as much as
a six fold increased carcinogenicity . Another example is CT scans. It has
been estimated that .4% of all cancers in the U.S. are caused by them .
Clearly cancer screening programs that rely on intrinsically carcinogenic
diagnostic technologies (as well as carcinogenic treatments like
chemotherapy and radiotherapy) must be halted if they can not actually be
proven to “save lives,” which, I believe, the study clearly demonstrates.
The study concludes, powerfully:
We encourage healthcare providers to be frank about the limitations of
screening—the harms of screening are certain, but the benefits in overall
mortality are not. Declining screening may be a reasonable and prudent
choice for many people. Providers should also encourage participation in
We call for higher standards of evidence, not to satisfy an esoteric standard,
but to enable rational, shared decision making between doctors and
patients. As Otis Brawley, chief scientific and medical officer of the
American Cancer Society, often states: “We must be honest about what we
know, what we don’t know, and what we simply believe.”
For additional learning listen to the lead author Vinay Prasad ‘s interview from
the British Medical Journal’s website below:
Read more on this topic below:
‘Hidden Dangers’ of Mammograms Every Woman Should Know
Tuesday, July 16th 2013 at 12:30 pm
Sayer Ji, Founder
Millions of women undergo them annually, but few are
even remotely aware of just how many dangers they are
exposing themselves to in the name of prevention, not the
least of which are misdiagnosis, overdiagnosis and the
promotion of breast cancer itself.
A new study published in the Annals of Family Medicine titled, Longterm
psychosocial consequences of falsepositive
screening mammography , brings to
the forefront a major underreported harm of breast screening programs: the
very real and lasting trauma associated with a falsepositive
diagnosis of breast
The study found th at women with falsepositive
diagnoses of breast cancer, even
three years after being declared free of cancer , “consistently reported greater
negative psychosocial consequences compared with women who had normal
findings in all 12 psychosocial outcomes.”
The psychosocial and existential parameters adversely affected were:
● Sense of dejection
● Negative impact on behavior
● Negative impact on sleep
● Degree of breast selfexamination
● Negative impact on sexuality
● Feeling of attractiveness
● Ability to keep ‘mind off things’
● Worries about breast cancer
● Inner calm
● Social network
● Existential values
What is even more concerning is that “[S]ix months after final diagnosis, women
findings reported changes in existential values and inner
calmness as great as those reported by women with a diagnosis of breast
In other words, even after being “cleared of cancer,” the measurable adverse
psychospiritual effects of the trauma of diagnosis were equivalent to actually
having breast cancer.
Given that the cumulative probability of falsepositive
recall or biopsy
recommendation after 10 years of screening mammography is at least 50%, 
this is an issue that will affect the health of millions of women undergoing
routine breast screening.
The Curse of False Diagnosis and ‘BonePointing’
Also, we must be cognizant of the fact that these observed ‘psychosocial’ and
‘existential’ adverse effects don’t just cause some vaguely defined ‘mental
anguish,’ but translate into objectively quantifiable physiological consequences of
a dire nature.
For instance, last year, a groundbreaking study was published in the New
England Journal of Medicine showing that, based on data on more than 6 million
Swedes aged 30 and older, the risk of suicide was found to be up to 16 times
higher and the risk of heartrelated
death up to 26.9 times higher during the
first week following a positive versus a negative cancer diagnosis. 
This was the first study of its kind to confirm that the trauma of diagnosis can
result in, as the etymology of the Greek word trauma reveals, a “physical
wound.” In the same way as Aborigonal cultures had a ‘ritual executioner’ or
‘bone pointer’ known as a Kurdaitcha who by pointing a bone at a victim with the
intention of cursing him to death, resulting in the actual selfwilled
death of the
accursed, so too does the modern ritual of medicine reenact ancient belief
systems and power differentials, with the modern physician, whether he likes it
or not, a ‘priest of the body.’; we must only look to the wellknown
the placebo and nocebo effects to see these powerful, “irrational” processes still
Millions Harmed by Breast Screening Despite Assurances
to the Contrary
Research of this kind clearly indicates that the conventional screening process
carries health risks, both to body and mind, which may outstrip the very dangers
the medical surveillance believes itself responsible for, and effective at,
mitigating. For instance, according to a groundbreaking study published last
November in New England Journal of Medicine , 1.3 million US women were
overdiagnosed and overtreated over the past 30 years.  These are the ‘false
positives’ that were never caught, resulting in the unnecessary irradiation,
chemotherapy poisoning and surgery of approximately 43,000 women each
year. Now, when you add to this dismal statistic the millions of ‘false positives’
that while being caught nevertheless resulted in producing traumas within those
women, breast screening begins to look like a veritable nightmare of
‘Hidden Dangers’ of Mammograms Every Woman Should
Tuesday, July 16th 2013 at 12:30 pm
Sayer Ji, Founder
And this does not even account for the radiobiological dangers of the xray
mammography screening process itself, which may be causing an epidemic of
mostly unackowledged radiationinduced
breast cancers in exposed populations.
For instance, in 2006, a paper published in the British Journal of Radiobiology ,
titled “Enhanced biological effectiveness of low energy Xrays
for the UK breast screening programme,” revealed the type of radiation used in
breast screenings is much more carcinogenic than previously
Recent radiobiological studies have provided compelling evidence that
the low energy Xrays
as used in mammography are approximately four
possibly as much as six times more
effective in causing
mutational damage than higher energy Xrays.
Since current radiation
risk estimates are based on the effects of high energy gamma radiation,
this implies that the risks of radiationinduced
breast cancers for
are underestimated by the same factor. 
Even the breast cancer treatment protocols themselves have recently been
found to contribute to enhancing cancer malignancy and increasing mortality.
Chemotherapy and radiation both appear to enrich the cancer stem cell
populations , which are at the root of breast cancer malignancy and
invasiveness. Last year, in fact, the prestigious journal Cancer , a publication of
the American Cancer Society, published a study performed by researchers from
the Department of Radiation Oncology at the UCLA Jonsson Comprehensive
Cancer Center showing that even when radiation kills half of the tumor cells
treated, the surviving cells which are resistant to treatment, known as induced
breast cancer stem cells (iBCSCs), were up to 30 times more likely to form
tumors than the nonirradiated breast cancer cells. In other words, the radiation
treatment regresses the total population of cancer cells, generating the false
appearance that the treatment is working, but actually increases the ratio of
highly malignant to benign cells within that tumor, eventually leading to the
death of the patient. 
What we are increasingly bearing witness to in the biomedical literature itself is
that the conventional breast cancer prevention and treatment strategy and
protocols are bankrupt. Or, from the perspective of the more cynical observer, it
is immensely successful, owing to the fact that it is driving billions of dollars or
revenue by producing more of what it claims to be fighting.
The time has come for a radical transformation in the way that we understand,
screen for, prevent and treat cancer. It used to be that natural medical
advocates didn’t have the socalled
‘evidence’ to back up their
intuitive and/or anecdotal understanding of how to keep the human body in
health and balance. That time has passed. GreenMedInfo.com , for instance,
has over 20,000 abstracts indexed in support of a return to a medical model
where the ‘alternative’ is synthetic, invasive, emergency modeled
the norm is using food, herbs, minerals, vitamins and lifestyle changes to
maintain, promote and regain optimal health.
 John Brodersen, Volkert Dirk Siersma. Longterm
consequences of falsepositive
screening mammography. Ann Fam Med.
 Rebecca A Hubbard, Karla Kerlikowske, Chris I Flowers, Bonnie C Yankaskas,
Weiwei Zhu, Diana L Miglioretti. Cumulative probability of falsepositive
recall or biopsy recommendation after 10 years of screening
mammography: a cohort study. Ann Intern Med. 2011 Oct 18
 Research: Come Diagnoses Kill You Quicker Than The Cancer , April
 30 Years of Breast Screening: 1.3 Million Women Wrongly Treated ,
 GreenMedInfo.com, How XRay
Mammography Is Accelerating the
Epidemic of Breast Cancer , June 2012
 GreenMedInfo.com, Study: Radiation Therapy Can Make Cancers 30x
More Malignant , June 2012
● Millions Fall Prey To This Deadly Breast Cancer Myth
● Millions Wrongly Treated for ‘Cancer,’ National Cancer
● Pinkwashing Hell: Breast Removal as a Form of
● Thyroid Cancer Epidemic Caused by Misinformation, Not
● Is BRCA “Breast Cancer Gene” A Death Sentence
● 30 Years of Breast Screening: 1.3 Million Wrongly Treated
● Astounding Number of Medical Procedures Have No
● Ovarian Cancer: What We Think We Know May Harm Us
● Why Angelina Jolie Should Leave Her Ovaries Alone
● Breast Screenings Creating An Epidemic of
1 Review Cancer statistics , 2004. Jemal A, Tiwari RC, Murray T, Ghafoor A,
Samuels A, Ward E, Feuer EJ, Thun MJ, American Cancer Society CA Cancer J
Clin. 2004 JanFeb;
Sayer Ji is founder of Greenmedinfo.com , on the Board of Governors for the
National Health Federation , and Fearless Parent , Steering Committee
Member of the Global GMO Free Coalition (GGFC), a reviewer at the
International Journal of Human Nutrition and Functional Medicine .
Die opdracht is voor ons te zwaar gebleken,
aan deze eis konden wij niet voldoen.
Wij hebben ons vaak op onszelf verkeken,
wij dachten klaar te zijn met goed fatsoen.
Voor elk goed doel hebben wij geld gegeven,
de derde wereld altijd goed bedacht,
een eenzaam mens wel eens een brief geschreven,
een zieke wel eens een bezoek gebracht.
Maar was het wel de mens waaraan wij dachten,
is door ons wel in God’s naam leed verzacht?
Was ‘t liefde, of was het enkel plicht betrachten?
Had God misschien iets meer van ons verwacht?
Wij staan beschaamd Heer, voor uw heilige ogen,
want, als U onze daden samenvat,
blijkt, dat wij wel elkander lijden mogen,
maar slecht ons zelven hebben liefgehad.
Vergeef ons onze schuld, en al ons falen
waarvoor Uw zoon aan ‘t kruis zijn bloed vergoot,
om onze zonden stervend te betalen.
Hij heeft ons liefgehad tot in de dood.
# Enny IJskes-Kooger/ In zijn Schaduw
(DE WEG, DE WAARHEID EN BEVESTIGD DOOR ONDERTEKENDE)
Citizens of the world have been in the lead to create a New Humanism for the twenty-first century in keeping with the UNESCO call for “the development of a universal global consciousness based on dialogue in a climate of trust and mutual understanding.” World Citizens welcome the UNESCO-led Decade for the Rapprochement of Cultures (2013-2022). Thus we will highlight the creative efforts of individuals who have built bridges of understanding over the divides of cultures, social classes and ethnicity to create a foundation for the New Humanism.
Abraham H. Maslow(1908-1970) A Cultural-Bridge-builder through Psychology
Abraham Maslow was a US professor of psychology, most of his career at Brandeis University in Massachusetts.(1) Maslow’s writings cover a wide range from an early interest in anthropology to his later applications of humanistic psychology to business and education. His mature views are presented in a posthumous work The Farther Reaches of Human Nature (2). However, it is his work on the hierarchy of inborn needs and the concept of self-actualization which are most directly related to the Basic Needs Approach to Development Planning.
Maslow constructed what he called a “Needs Hierarchy” which he believed was trans-cultural, appearing in all human beings, in all cultures. His model is a six-level model which depicts a human energy flowing upward with each need leading to the next level when fulfilled:
Physical Needs: food, water, clothing, shelter, hygiene, and health care.
Safety-Security Needs: the need for psychological and physical safety, freedom from fear.
Belonging Needs: the need for human relationship, affiliation to others, affection and psychological warmth.
Esteem Needs: the need for a positive image of self, a sense of inner dignity and value, respêct and recognition from others.
Self- Actualization Needs: the need to develop one’s potential, for creative expression, a sense of direction of one’s life.
Transcendent Needs: the need to commune with Nature, to become enlightened, to live in harmony with universal principles. The transcendent needs, what Maslow also calls the “value life” (spiritual, religious, philosophical) “is an aspect of human biology and is on the same continuum with the ‘lower’ animal life (tather than being in separated, dichotomized or mutually exclusive realms). It is probably therefore species-wide, supracultural even though it must be actualized by culture in order to exist.”
Maslow held that these needs are an unfolding, evolving process in all human beings everywhere. The ways in which needs are fulfilled are influenced by specific cultures, but the needs are universal, and society must be structured so that all these needs can be met. His emphasis is on the oneness of humanity.
If needs are not fulfilled, Maslow held, this will lead an individual or a larger group to “metapathologies” such as meaninglessness, despair, apathy, resignation and fatalism. Thus we need to design and implement new social and economic arrangements that more closely fit the needs of human nature.
The first three levels of needs − Physical Needs,; Safety-Security Needs, and Belonging Needs − can be met within the household-family. It is on these three levels of needs that the ILO Basic Needs Approach is focused. Esteem Needs and Self-Actualization Needs are linked to the wider society and require cooperation with and action in the wider society.
Transcendent Needs are fulfilled both individually − a confidence that we are basically one with the cosmos instead of strangers to it − and within society as a person needs access to philosophical currents of thought in order to express to others this confidence in harmony.
Abraham Maslow provides a useful framework for using the Basic Needs Approach to Development Planning as based on the deepest nature of the person. Each person is an active, self-governing mover, chooser and center of his own life.
For an overview of Maslow’s life and writings see Edward Hoffman (Ed.) The Right to be Human: A Bibliography of Abraham Maslow (Los Angeles, CA: Tarcher Publishers, 1988)
Abraham H. Maslow. The Father Reaches of Human Nature (New York: The Viking Press, 1971)
Rene Wadlow, President Benevolent Earth Federation, President, Association of World Citizens
Compiled With Warm Wishes Omni Love & Planetary Service
Dr Tony Sunil VERMA
HELE FIJNE KERSTDAGEN EN EEN GELUKKIG NIEUW JAAR EN EEUWIGE VOORSPOED
MET VRIENDELIJKE GROETEN, WARM REGARDS,
JORN JAKOB ALBERT BOOR
ECONAMIC GLOBAL ORGANIZATIONS
DREAM AVENUE CORP.
PHANG ‘NDAWO COMMUNICATIONS AND PROJECTS (SA)
ZEEWOLDE, FLEVOLAND, KONINKRIJK DER NEDERLANDEN (LEEUW VAN JUDA 😉
Please … ! (The wicked 😉
In materially identical pieces, US Army propagandist Colonel Steve Warren fires off several rounds of bullet points to the effect that Islamic State is “beginning to crack” under the “onslaught of air strikes and counter-terror measures”, and that would-be jihadis are realising that “this caliphate isn’t all unicorns and rainbows” – whatever that means.
His main bombardment consists of the notion that we have the US to thank for this progress.
He says: “We see them having lost about 40% of the territory that they held at the pinnacle of their strength in Iraq, and they have lost about 10% of the territory they once held in Syria. We believe this failure is due to several factors, the first and foremost I believe is the presence of devastating Coalition air power.
“Second is the increasing capability of the Iraqi security forces in Iraq.
“And third is the increasing cohesion, strength and unity of the 65-nation coalition that has come together to defeat Daesh.”
The problem with the colonel’s presentation from a propaganda point of view is that it is not very good. It is too blunt. People are more sophisticated than this – even people who spend a lot of time on the Daily Mail website.
Whether you like Putin or not, his decisiveness in Syria was a game-changer. After years of pretending to chase ISIS, the US was suddenly shown to have been merely simulating activity for entirely different ends.
Russia, on the other hand, went in to clean house and has worked effectively with the army of Syria to that end.
Any grown-up apportioning of credit which leaves out the key players discredits itself before it begins. It’s like trying to discuss ‘Gone with the Wind’ without reference to either Scarlett or Rhett.
Yet this is Colonel Warren’s approach. He simply disregards everything which doesn’t suit him: the fact that the US-led“coalition” has no legal mandate in Syria; that Assad is the democratically elected head of that country; that the US is engaged in destroying arbitrarily and unilaterally (in fact, if not in name) yet another foreign state which has done it no harm; that – finally and emphatically – if the resolve of this group of US-sponsored terrorists is “beginning to crack”, it is thanks entirely to the efforts of the Syrian Army and the air support provided by the only outside country to have a legitimate role in Syria: Russia.
The colonel has a tough job on his hands. Too many people now understand that what is needed for the defeat of so-called Islamic State, is for the US government to cease providing it with arms and funding. They know that this chameleon-like organization only makes any sense when seen as an arm of US-Israeli policy; to Balkanise the Middle East and make it easier to control, and to facilitate the theft of their wealth by corporations; its secondary purpose – in combination with the efforts of men like George Soros – is to create the blowback needed to justify an invasion of Europe by an army of aggressive, unskilled young men in order to asset-strip it, drag it into chaos, and prevent what the US fears most: a new union built around a freer Germany co-joined with Russia.
Agents of US military ambitions such as the CIA have always created media outlets – or subverted existing ones – as part of propaganda wars against foreign countries. But two thorns appeared in the side of this policy in recent years.
Firstly, in a world connected by the internet, propaganda intended for foreign consumption routinely feeds back into the US arena, thereby falling foul of laws against the use of US tax dollars to conduct informational war against the taxpayers themselves.
Secondly, again in a world connected by the internet, US citizens are increasingly discovering uncomfortable truths about everything from 9/11 to banking and medical cartels. Thus, distrust of government internally is a growing problem for those tasked with managing the population.
As ForeignPolicy.com notes, the philosophical foundation for the policy of not using US tax dollars to propagandize US citizens was to maintain a distinction between the US and the Soviet Union.
Happily for those burdened by such concerns, the Soviet Union is no more, and in 2013 the US removed the last fig leaf of pretense that the US government did not engage in propaganda internally.
This makes everything simpler. The war propaganda machine can lump the US citizenry together with everyone else and get on with the job of bringing – or keeping – everyone on board with whatever the Pentagon’s masters wish it to do.
In a war scenario – and the US is almost always making war on someone – budgets extend and keep the military industrial complexmaking more for its shareholders.
The guiding principle of those who get the government contracts which collectively make wars happen is to ensure that the last penny gets spent so that the same amount – or preferably more – will be forthcoming in the next financial period.
The US military, then, is a huge state-run business in the form of a bureaucratic and accounting machine tasked with blowing things up as a function of securing funding. It manages to kill a lot of people, but it is a blunt if heavy instrument. It was described to me by a friend – then himself serving in US Army Intelligence – as “the Post Office with bombs”.
Like any large organization it works on a system of protocols. There will be a manual somewhere which tells you what a nail is, what a hammer is, and how the two are to be used together. Then – in the context of the US military – there will be a Halliburton contract for huge numbers of hammers and nails, and a general somewhere with shares in Halliburton who will find a use for both.
In the view of military planners, then, propaganda is just another hammer.
Recent figures are conspicuously hard to find, but Fox News reported back in 2009 that the Pentagon was spending $4.7 billion annually on what it calls “the human terrain” of world public opinion, employing an army of 27,000 whose sole mission was the promotion of US military objectives via media.
What the numbers are today, we can only guess. But it won’t be less – because it never is.
Planners of military propaganda will allocate their budgets the same as any public relations company. That is, they will focus on predetermined demographics and opinion leaders – endlessly testing and measuring to see what is working, what not – and using the data thus generated to provide the engine for new, expensive proposals to the client.
The delivery method – or ‘tactic’ to use public-relations speak – which Colonel Warren employed this week has become almost hackneyed through over-use. It is to claim the narrative as fully owned; to assume the conclusions one wishes to see as given; to deny the existence of either the legitimate claims – or participation – of any party or view but those which suit one’s agenda.
While many consumers of propaganda may not understand this model intellectually, increasing numbers recognize it viscerally. They have an intuitive appreciation of the genre, having been exposed to it many thousands of times. The state of Israel uses it to exclude the Palestinians from any debate about their own land; corporations use it to achieve and maintain monopolies; governments use it to keep people paying income tax. Subconsciously, the model is associated with various forms of oppression and tyranny.
This hammer-and-nail school of propaganda has its roots in a vertical and uniform, pre-internet informational model. The model today is broad and disparate; people can shop for opinions.
The colonel’s activities this week will have helped eat some small part of the budget, but I would argue that this particular assault was a bad use of resources. It is too on-the-nose. It looks too much like what it is: a hammer.
And as people rumble the type of tactic employed above by the Pentagon, they tend to regard the source thereafter as unreliable on point of principle – not a good outcome if your goal is to cultivate “the human terrain” of world public opinion to your advantage.
The Goat (Chinese: 羊; pinyin: Yang) is the eighth sign of the 12-year cycle of animals that appear in the Chinese zodiac related to the Chinese calendar. The sign is also referred to as the Ram or Sheep sign, since the Chinese word Yang is more accurately translated as Capri-nae, a taxonomic subfamily which includes both sheep and goats.
Basic Astrology Elements
Earthly Branch of Birth Year: Wei
The Five Elements: Earth(Tu)
Yin Yang: Yin ([Chinese philosophy] negative/passive/female principle in nature)
Lucky Numbers: 3, 9, 4; Avoid: 6, 7, 8
Lucky Flowers: carnation, primrose, alice flower
Lucky Colors: green, red, purple; Avoid: golden, coffee
“The Goat, along with the Goat’s Horn, provides us with an abundance of Symbolism and Mythology. The Goat is also a Metaphor which expresses itself in our daily lives.
The Goat is a sure-footed animal that is as much at home on mountain slopes and mountain tops as it is on flat ground. In this aspect, the Goat is a Symbol of agility. And, in its ability to scale a mountain, it is a Symbol of determination.
The male Goat, because of its reproductive prowess, is a Symbol of virility, vitality, potency, and stamina. It therefore represents the energy of the creative and regenerative Seed: traits which are expressed in the Mythology and Symbolism of the Greek Goat God Pan. Pan is the forest Deity that is the creative and regenerative spirit for all the plant and animal life which inhabit His domain.
The female Goat is a Symbol of nurturing and nourishment.
The she-Goat Amalthea is the nurturing Goddess who was the wet-nurse, or nursemaid, of the Greek God Zeus. From the she-Goat we also get the Word, Nanny: a term we use today for the person we charge and entrust with caring for our infants and young children in our absence.
The milk of the female Goat is a sustenance fed to infants. Its high quality and nourishment is a Compatible substitute for mother’s milk and for formulas which may not be agreeable to infants.
The Horn of the Goat, both male and female, is both Symbolic and practical.
It is the Horn of the she-Goat Amalthea, the wet-nurse of Zeus, which is the Symbolic Cornucopia: the Horn of Plenty, the Horn of perpetual abundance.
The Horn was also used as a drinking vessel in antiquity and is a dual Symbol in that it is both masculine and feminine.
When pointed upwards, the Horn is the masculine, penetrating, and assertive, phallic Symbol. When pointed downwards, it is the feminine, receptive cup or chalice (womb).
This dual Symbolism, therefore, is representative of the yin/yang energy. And the the combination of both the upturned Horn along with the downturned Horn carries the same sacred Symbolism and Esoteric meaning as the six-pointed “Star of David”.
Those of us who are Harry Potter fans are familiar with the Bezoar. A Bezoar is a stone which grows in the stomach of animals. Bezoars removed from a Goat were considered the most potent and were ground up and used as elixirs and antidotes for poisoning.
Bezoars were sought because they were believed to have the power of a universal antidote against any poison. It was believed that a drinking glass which contained a Bezoar would neutralize any poison poured into it. The word “bezoar” comes from the
Persian pâdzahr (پادزهر), which literally means “protection from poison.” (from: Wikipedia.com)
As we can see, the Goat is not only a Symbolic and Mythological creature, it is also an animal which provides practical and nutritional benefits to mankind. And it is easy to understand why this animal was so honored and cherished by ancient cultures throughout the world.”
Retrieved 2/28/2016: http://www.aseekersthoughts.com/…/goat-goats-horn-and-bezoa…
See all lists…
Econamic Global Organizations #EGO
Johannesburg, South Africa Area
Kees Becker Logistiek B.V.
Bakker Logistics & Transport
The 19th century image of a Sabbatic Goat, created by Eliphas Levi. The arms bear theLatin words SOLVE(separate) and COAGULA (join together), i.e., the power of “binding and loosing” usurped from God and, according to Catholic tradition, from the ecclesiastical hierarchy acting as God’s representative on Earth.
The original goat pentagram first appeared in the book “La Clef de la Magie Noire” by French occultistStanislas de Guaita, in 1897. This symbol would later become synonymous with Baphomet, and is commonly referred to as the Sabbatic Goat. Samael is a figure inTalmudiclore and Lilith, a female demon inJewish mythology. The Hebrew letters at the five points of the pentagram spell out Leviathan, a mythic creature in Jewish lore. This symbol was later adapted by the Church of Satan in 1969 and officially named the Sigil of Baphomet
Baphomet (/ˈbæfɵmɛt/; from Medieval Latin Baphometh, Baffometi, Occitan Bafometz) is a term originally used to describe an idol or other deity that the Knights Templar were accused of worshiping, and that subsequently was incorporated into disparate occult and mystical traditions. It appeared as a term for a pagan idol in trial transcripts of the Inquisition of the Knights Templar in the early 14th century. The name first came into popular English usage in the 19th century, with debate and speculation on the reasons for the suppression of the Templars.
Since 1856, the name Baphomet has been associated with a “Sabbatic Goat” image drawn by Eliphas Levi which contains binary elements representing the “sum total of the universe” (e.g. male and female, good and evil, etc.).
The name Baphomet appeared in July 1098 in a letter by the crusader Anselm of Ribemont:
Sequenti die aurora apparente, altis vocibusBaphometh invocaverunt; et nos Deum nostrum in cordibus nostris deprecantes, impetum facientes in eos, de muris civitatis omnes expulimus.
As the next day dawned, they called loudly upon Baphometh; and we prayed silently in our hearts to God, then we attacked and forced all of them outside the city walls.
A chronicler of the First Crusade, Raymond of Aguilers, called the mosques Bafumarias. The name Bafometz later appeared around 1195 in the Occitan poem “Senhors, per los nostres peccatz” by the troubadour Gavaudan. Around 1250 a poem bewailing the defeat of the Seventh Crusade by Austorc d’Aorlhac refers to Bafomet. De Bafomet is also the title of one of four surviving chapters of an Occitan translation of Ramon Llull‘s earliest known work, the Libre de la doctrina pueril, “book on the instruction of children”.
Two Templars burned at the stake, from a French 15th-century manuscript
When the medieval order of the Knights Templar was suppressed by King Philip IV of France, on Friday October 13, 1307, Philip had many French Templars simultaneously arrested, and then tortured into confessions. Over 100 different charges had been leveled against the Templars. Most of them were dubious, as they were the same charges that were leveled against the Cathars and many of King Philip’s enemies; he had earlier kidnapped Pope Boniface VIII and charged him with near identical offenses of heresy, spitting and urinating on the cross, and sodomy. Yet Malcolm Barber observes that historians “find it difficult to accept that an affair of such enormity rests upon total fabrication”. The “Chinon Parchmentsuggests that the Templars did indeed spit on the cross,” says Sean Martin, and that these acts were intended to simulate the kind of humiliation and torture that a Crusader might be subjected to if captured by the Saracens, where they were taught how to commit apostasy “with the mind only and not with the heart”. Similarly Michael Haag suggests that the simulated worship of Baphomet did indeed form part of a Templar initiation ritual.
The indictment (acte d’accusation) published by the court of Rome set forth … “that in all the provinces they had idols, that is to say, heads, some of which had three faces, others but one; sometimes, it was a human skull … That in their assemblies, and especially in their grand chapters, they worshipped the idol as a god, as their saviour, saying that this head could save them, that it bestowed on the order all its wealth, made the trees flower, and the plants of the earth to sprout forth.”
The name Baphomet comes up in several of these confessions. Peter Partner states in his 1987 book The Knights Templar and their Myth, “In the trial of the Templars one of their main charges was their supposed worship of a heathen idol-head known as a ‘Baphomet’ (‘Baphomet’ = Mahomet = Muhammad).” The description of the object changed from confession to confession. Some Templars denied any knowledge of it. Others, under torture, described it as being either a severed head, a cat, or a head with three faces. The Templars did possess several silver-gilt heads as reliquaries, including one markedcapud lviiim, another said to be St. Euphemia, and possibly the actual head ofHugues de Payens. The claims of an idol named Baphomet were unique to the Inquisition of the Templars. Karen Ralls, author of the Knights Templar Encyclopedia, argues that it is significant that “no specific evidence [of Baphomet] appears in either the Templar Rule or in other medieval period Templar documents.”
Gauserand de Montpesant, a knight of Provence, said that their superior showed him an idol made in the form of Baffomet; another, named Raymond Rubei, described it as a wooden head, on which the figure of Baphomet was painted, and adds, “that he worshipped it by kissing its feet, and exclaiming, ‘Yalla,’ which was,” he says, “verbum Saracenorum,” a word taken from the Saracens. A templar of Florence declared that, in the secret chapters of the order, one brother said to the other, showing the idol, “Adore this head — this head is your god and your Mahomet.”
Modern scholars such as Peter Partner and Malcolm Barber agree that the name of Baphomet was an Old French corruption of the name Muhammad, with the interpretation being that some of the Templars, through their long military occupation of the Outremer, had begun incorporating Islamic ideas into their belief system, and that this was seen and documented by the Inquisitors as heresy. Alain Demurger, however, rejects the idea that the Templars could have adopted the doctrines of their enemies. Helen Nicholson writes that the charges were essentially “manipulative” — the Templars “were accused of becoming fairy-tale Muslims.” Medieval Christians believed that Muslims wereidolatrous and worshipped Muhammad as a god, with mahomet becoming mammet in English, meaning an idol or false god. This idol-worship is attributed to Muslims in several chansons de geste. For example, one finds the gods Bafum e Travagan in a Provençal poem on the life of St. Honorat, completed in 1300. In the Chanson de Simon Pouille, written before 1235, a Saracen idol is called Bafumetz.
While modern scholars and the Oxford English Dictionary state that the origin of the name Baphomet was a probable Old French version of “Mahomet”,alternative etymologies have also been proposed:
What properly was the sign of the Baffomet, ‘figura Baffometi,’ which was depicted on the breast of the bust representing the Creator, cannot be exactly determined … I believe it to have been the Pythagorean pentagon (Fünfeck) of health and prosperity: … It is well known how holy this figure was considered, and that the Gnostics had much in common with the Pythagoreans. From the prayers which the soul shall recite, according to the diagram of the Ophite-worshippers, when they on their return to God are stopped by the Archons, and their purity has to be examined, it appears that these serpent-worshippers believed they must produce a token that they had been clean on earth. I believe that this token was also the holy pentagon, the sign of their initiation (τελειας βαφης μετεος).
Joseph von Hammer-Purgstall(1774-1856) associated a series of carved or engraved figures found on a number of supposed 13th century Templar artifacts (such as cups, bowls and coffers) with the Baphometic idol.
In 1818, the name Baphomet appeared in the essay by the Viennese Orientalist Joseph Freiherr von Hammer-Purgstall,Mysterium Baphometis revelatum, seu Fratres Militiæ Templi, qua Gnostici et quidem Ophiani, Apostasiæ, Idoloduliæ et Impuritatis convicti, per ipsa eorum Monumenta (“Discovery of the Mystery of Baphomet, by which the Knights Templars, like the Gnostics and Ophites, are convicted of Apostasy, of Idolatry and of moral Impurity, by their own Monuments”), which presented an elaborate pseudohistory constructed to discredit Templarist Masonry and, by extension, Freemasonry.Following Nicolai, he argued, using as archaeological evidence “Baphomets” faked by earlier scholars and literary evidence such as the Grail romances, that the Templars were Gnostics and the “Templars’ head” was a Gnostic idol called Baphomet.
His chief subject is the images which are called Baphomet … found in several museums and collections of antiquities, as in Weimar … and in the imperial cabinet in Vienna. These little images are of stone, partly hermaphrodites, having, generally, two heads or two faces, with a beard, but, in other respects, female figures, most of them accompanied by serpents, the sun and moon, and other strange emblems, and bearing many inscriptions, mostly in Arabic … The inscriptions he reduces almost all toMete[, which] … is, according to him, not the Μητις of the Greeks, but the Sophia, Achamot Prunikos of the Ophites, which was represented half man, half woman, as the symbol of wisdom, unnatural voluptuousness and the principle of sensuality … He asserts that those small figures are such as the Templars, according to the statement of a witness, carried with them in their coffers. Baphomet signifies Βαφη Μητεος,baptism of Metis, baptism of fire, or the Gnostic baptism, an enlightening of the mind, which, however, was interpreted by the Ophites, in an obscene sense, as fleshly union … the fundamental assertion, that those idols and cups came from the Templars, has been considered as unfounded, especially as the images known to have existed among the Templars seem rather to be images of saints.
Hammer’s essay did not pass unchallenged, and F. J. M. Raynouard published an “Etude sur ‘Mysterium Baphometi revelatum'” in Journal des savants the following year. Charles William King criticized Hammer saying he had been deceived by “the paraphernalia of … Rosicrucian or alchemical quacks,” and Peter Partner agreed that the images “may have been forgeries from the occultist workshops.” At the very least, there was little evidence to tie them to the Knights Templar — in the 19th century some European museums acquired such pseudo-Egyptian objects, which were catalogued as “Baphomets” and credulously thought to have been idols of the Templars.
Later in the 19th century, the name of Baphomet became further associated with the occult.Eliphas Levi publishedDogme et Rituel de la Haute Magie (“Dogmas and Rituals of High Magic”) as two volumes (Dogme 1854, Rituel 1856), in which he included an image he had drawn himself which he described as Baphomet and “The Sabbatic Goat”, showing a winged humanoid goat with a pair of breasts and a torch on its head between its horns (illustration, top). This image has become the best-known representation of Baphomet. Lévi considered the Baphomet to be a depiction of the absolute in symbolic form and explicated in detail his symbolism in the drawing that served as the frontispiece:
The goat on the frontispiece carries the sign of the pentagram on the forehead, with one point at the top, a symbol of light, his two hands forming the sign of occultism, the one pointing up to the white moon of Chesed, the other pointing down to the black one of Geburah. This sign expresses the perfect harmony of mercy with justice. His one arm is female, the other male like the ones of the androgyne of Khunrath, the attributes of which we had to unite with those of our goat because he is one and the same symbol. The flame of intelligence shining between his horns is the magic light of the universal balance, the image of the soul elevated above matter, as the flame, whilst being tied to matter, shines above it. The beast’s head expresses the horror of the sinner, whose materially acting, solely responsible part has to bear the punishment exclusively; because the soul is insensitive according to its nature and can only suffer when it materializes. The rod standing instead of genitals symbolizes eternal life, the body covered with scales the water, the semi-circle above it the atmosphere, the feathers following above the volatile. Humanity is represented by the two breasts and the androgyne arms of this sphinx of the occult sciences.
Lévi’s depiction of Baphomet is similar to that of the Devil in early Tarot cards. Lévi, working with correspondences different from those later used by S. L. MacGregor Mathers, “equated the Devil Tarot key with Mercury,” giving “his figure Mercury’s caduceus, rising like a phallus from his groin.”
Lévi believed that the alleged devil worship of the medieval Witches’ Sabbath was a perpetuation of ancient pagan rites. A goat with a candle between its horns appears in medieval witchcraft records, and other pieces of lore are cited in Dogme et Rituel.
Below this figure we read a frank and simple inscription — THE DEVIL. Yes, we confront here that phantom of all terrors, the dragon of the all theogenies, the Ahriman of the Persians, the Typhon of the Egyptians, the Python of the Greeks, the old serpent of the Hebrews, the fantastic monster, the nightmare, the Croquemitaine, the gargoyle, the great beast of the Middle Ages, and — worse than all these — the Baphomet of the Templars, the bearded idol of the alchemist, the obscene deity of Mendes, the goat of the Sabbath. The frontispiece to this ‘Ritual’ reproduces the exact figure of the terrible emperor of night, with all his attributes and all his characters…. Yes, in our profound conviction, the Grand Masters of the Order of Templars worshipped the Baphomet, and caused it to be worshipped by their initiates; yes, there existed in the past, and there may be still in the present, assemblies which are presided over by this figure, seated on a throne and having a flaming torch between the horns. But the adorers of this sign do not consider, as do we, that it is a representation of the devil; on the contrary, for them it is that of the god Pan, the god of our modern schools of philosophy, the god of the Alexandrian theurgic school and of our own mystical Neoplatonists, the god ofLamartine and Victor Cousin, the god ofSpinoza and Plato, the god of the primitive Gnostic schools; the Christ also of the dissident priesthood…. The mysteries of the Sabbath have been variously described, but they figure always in grimoires and in magical trials; the revelations made on the subject may be classified under three heads — 1. those referring to a fantastic and imaginary Sabbath; 2. those which betray the secrets of the occult assemblies of veritable adepts; 3. revelations of foolish and criminal gatherings, having for their object the operations of black magic.
Lévi’s Baphomet, for all its modern fame, does not match the historical descriptions from the Templar trials, although it may also have been partly inspired by grotesque carvings on the Templar churches of Lanleff in Brittany and Saint-Merri in Paris, which depict squatting bearded men with bat wings, female breasts, horns and the shaggy hindquarters of a beast,[unreliable source?] as well as Viollet-le-Duc‘s vivid gargoylesthat were added to Notre Dame de Paris about the same time as Lévi’s illustration.
This article appears to contain unverifiable speculation and unjustified claims. Information must be verifiableand based on reliable published sources. Please remove unverified speculation from the article. (June 2013)
Lévi called his image “The Goat of Mendes”, possibly following Herodotus‘ account that the god of Mendes — the Greek name for Djedet, Egypt — was depicted with a goat’s face and legs. Herodotus relates how all male goats were held in great reverence by the Mendesians, and how in his time a woman publicly copulated with a goat. E. A. Wallis Budge writes,
At several places in the Delta, e.g. Hermopolis, Lycopolis, and Mendes, the god Pan and a goat were worshipped; Strabo, quoting (xvii. 1, 19) Pindar, says that in these places goats had intercourse with women, and Herodotus (ii. 46) instances a case which was said to have taken place in the open day. The Mendisians, according to this last writer, paid reverence to all goats, and more to the males than to the females, and particularly to one he-goat, on the death of which public mourning is observed throughout the whole Mendesian district; they call both Pan and the goat Mendes, and both were worshipped as gods of generation and fecundity. Diodorus (i. 88) compares the cult of the goat of Mendes with that of Priapus, and groups the god with the Pans and the Satyrs. The goat referred to by all these writers is the famous Mendean Ram, or Ram of Mendes, the cult of which was, according to Manetho, established byKakau, the king of the IInd dynasty.
Historically, the deity that was venerated at Egyptian Mendes was a ram deityBanebdjedet (literally Ba of the lord of djed, and titled “the Lord of Mendes”), who was the soul of Osiris. Lévi combined the images of the Tarot of Marseilles Devil card and refigured the ram Banebdjed as a he-goat, further imagined by him as “copulator in Anep and inseminator in the district of Mendes”.
The Baphomet of Lévi was to become an important figure within the cosmology of Thelema, the mystical system established by Aleister Crowley in the early twentieth century. Baphomet features in the Creed of the Gnostic Catholic Church recited by the congregation in The Gnostic Mass, in the sentence: “And I believe in the Serpent and the Lion, Mystery of Mysteries, in His name BAPHOMET.”
In Magick (Book 4), Crowley asserted that Baphomet was a divine androgyne and “the hieroglyph of arcane perfection”: Seen as that which reflects. “What occurs above so reflects below, or As above so below”
The Devil does not exist. It is a false name invented by the Black Brothers to imply a Unity in their ignorant muddle of dispersions. A devil who had unity would be a God… ‘The Devil’ is, historically, the God of any people that one personally dislikes… This serpent, SATAN, is not the enemy of Man, but He who made Gods of our race, knowing Good and Evil; He bade ‘Know Thyself!’ and taught Initiation. He is ‘The Devil’ of the Book of Thoth, and His emblem is BAPHOMET, the Androgyne who is the hieroglyph of arcane perfection… He is therefore Life, and Love. But moreover his letter is ayin, the Eye, so that he is Light; and his Zodiacal image is Capricornus, that leaping goat whose attribute is Liberty.
For Crowley, Baphomet is further a representative of the spiritual nature of the spermatozoa while also being symbolic of the “magical child” produced as a result ofsex magic. As such, Baphomet represents the Union of Opposites, especially as mystically personified in Chaos and Babalon combined and biologically manifested with the sperm and egg united in the zygote.
Crowley proposed that Baphomet was derived from “Father Mithras”. In his Confessions he describes the circumstances that led to this etymology:
I had taken the name Baphomet as my motto in the O.T.O. For six years and more I had tried to discover the proper way to spell this name. I knew that it must have eight letters, and also that the numerical and literal correspondences must be such as to express the meaning of the name in such a ways as to confirm what scholarship had found out about it, and also to clear up those problems which archaeologists had so far failed to solve … One theory of the name is that it represents the words βαφὴ μήτεος, the baptism of wisdom; another, that it is a corruption of a title meaning “Father Mithras”. Needless to say, the suffix R supported the latter theory. I added up the word as spelt by the Wizard. It totalled 729. This number had never appeared in my Cabbalistic working and therefore meant nothing to me. It however justified itself as being the cube of nine. The word κηφας, the mystic title given by Christ to Peter as the cornerstone of the Church, has this same value. So far, the Wizard had shown great qualities! He had cleared up the etymological problem and shown why the Templars should have given the name Baphomet to their so-called idol. Baphomet was Father Mithras, the cubical stone which was the corner of the Temple.
Lévi’s Baphomet is the source of the later Tarot image of the Devil in the Rider-Waite design. The concept of a downward-pointing pentagram on its forehead was enlarged upon by Lévi in his discussion (without illustration) of the Goat of Mendes arranged within such a pentagram, which he contrasted with the microcosmic man arranged within a similar but upright pentagram. The actual image of a goat in a downward-pointing pentagram first appeared in the 1897 book La Clef de la Magie Noire by Stanislas de Guaita. It was this image that was later adopted as the official symbol — called the Sigil of Baphomet — of the Church of Satan, and continues to be used among Satanists.
Promotional poster for Léo Taxil, Les Mystères de la franc-maçonnerie dévoilés (1886), adapts Lévi’s invention.
Baphomet, as Lévi’s illustration suggests, has occasionally been portrayed as a synonym ofSatan or a demon, a member of the hierarchy of Hell. Baphomet appears in that guise as a character in James Blish‘s The Day After Judgment. Christian evangelist Jack T. Chickclaims that Baphomet is a demon worshipped by Freemasons, a claim that apparently originated with the Taxil hoax. Léo Taxil‘s elaborate hoax employed a version of Lévi’s Baphomet on the cover of Les Mystères de la franc-maçonnerie dévoilés, his lurid paperback “exposé” of Freemasonry, which in 1897 he revealed as a hoax intended to ridicule the Catholic Church and its anti-Masonic propaganda.
Wikimedia Commons has media related to Baphomet.
The Ladder, which is rich in Symbolism and Metaphor, consists
of Horizontal Rungs and two Vertical Uprights. The Horizontal
rungs represent progressively higher levels of consciousness
and the two Vertical Uprights, (I I), represent the symbol for
The horizontal rungs remind us that our spiritual journey is
upward, and the vertical uprights remind us that our upward
journey occurs within the realm of duality.
As The Ladder has no moving parts, it symbolizes ascension
by way of personal Desire and Effort. The Ladder also reminds us
that reaching the highest realms of consciousness is not a short,
swift journey. Each rung represents a gradual ascent whereby
Wisdom, knowledge, enlightenment and perfection are earned
by us one step at a time. However, we must also keep in mind
that no journey is without its rests and pauses. Therefore,
whenever we require a respite during our spiritual ascent, the
rungs of The Ladder provide us with the support and strength we
need until we are ready to take our next step upward.
In addition, The Symbol of the Ladder also reminds us that our
upward ascent, from the bottom-most rung to the upper-most rung,
is a journey through the realm of duality. Before we can step up
to the next higher rung on The Ladder we must first experience and
master the Lessons of duality which exist at our current level of
As we steadfastly ascend the rungs of The Ladder we slowly elevate
ourselves, higher and higher, above the lower plane of the superficial
and mundane. Upon reaching the higher rungs of The Ladder, we
begin to breathe in the rarefied air of Higher Consciousness. It is at
these higher levels of consciousness that the mysteries of Eternity
slowly begin to un–veil their secrets to us.
When we reach the top-most rung of The Ladder we enter the
realm of highest consciousness. We have now left the frustrations,
confusions and restraints of the material world well below us and
are now finally able to enjoy the purified realm angels, higher spirits,
and Divinity. Through our personal Desire and Effort we have
Transcended Duality and achieved entrance into the Infinite
Domain of Enlightenment and Unity.
Comments and Emails: I welcome comments and emails from
people with similar thoughts and feelings. My email address is
located in the upper-left area of this page. Comments can be
posted by using the “Comment” link located below each article.
Also: If you found value in this article please feel free to forward
it to other like-minded individuals, organizations and sites.
Disclaimer: None of my articles should be considered to be
either advice or expertise. They are simply personal opinions
and no more. Everyone is encouraged to seek competent
advice from a licensed, registered, or certified professional
should such advice or service be required.
© copyright Joseph Panek 2009
|This article needs additional citations for verification. (September 2015)|
Marduk (Sumerian spelling in Akkadian: dAMAR.UTU 𒀭𒀫𒌓 “solar calf”; GreekΜαρδοχαῖος,Mardochaios) was a late-generation god from ancient Mesopotamia and patron deity of the city of Babylon. When Babylon became the political center of the Euphrates valley in the time of Hammurabi (18th century BC), he slowly started to rise to the position of the head of the Babylonian pantheon, a position he fully acquired by the second half of the second millennium BC. In the city of Babylon, he resided in the temple Esagila. “Marduk” is the Babylonian form of his name.
According to The Encyclopedia of Religion, the name Marduk was probably pronounced Marutuk. The etymology of the name Marduk is conjectured as derived from amar-Utu (“bull calf of the sun god Utu”). The origin of Marduk’s name may reflect an earlier genealogy, or have had cultural ties to the ancient city of Sippar (whose god was Utu, the sun god), dating back to the third millennium BC.
Marduk’s original character is obscure but he was later associated with water, vegetation, judgment, and magic. His consort was the goddess Sarpanit. He was also regarded as the son of Ea (Sumerian Enki) and Damkina and the heir of Anu, but whatever special traits Marduk may have had were overshadowed by the political development through which the Euphrates valley passed and which led to people of the time imbuing him with traits belonging to gods who in an earlier period were recognized as the heads of the pantheon. There are particularly two gods—Ea and Enlil—whose powers and attributes pass over to Marduk.
In the case of Ea, the transfer proceeded pacifically and without effacing the older god. Marduk took over the identity of Asarluhi, the son of Ea and god of magic, so that Marduk was integrated in the pantheon of Eridu where both Ea and Asarluhi originally came from. Father Ea voluntarily recognized the superiority of the son and hands over to him the control of humanity. This association of Marduk and Ea, while indicating primarily the passing of the supremacy once enjoyed by Eridu to Babylon as a religious and political centre, may also reflect an early dependence of Babylon upon Eridu, not necessarily of a political character but, in view of the spread of culture in the Euphrates valley from the south to the north, the recognition of Eridu as the older centre on the part of the younger one.
|Part of a series on|
While the relationship between Ea and Marduk is marked by harmony and an amicable abdication on the part of the father in favour of his son, Marduk’s absorption of the power and prerogatives of Enlil of Nippur was at the expense of the latter’s prestige. Babylon became independent in the early 19th century BC, and was initially a small city state, overshadowed by older and more powerful Mesopotamian states such as Isin, Larsa and Assyria. However, afterHammurabi forged an empire in the 18th century BC, turning Babylon into the dominant state in the south, the cult of Marduk eclipsed that of Enlil; although Nippur and the cult of Enlil enjoyed a period of renaissance during the over four centuries of Kassite control in Babylonia (c. 1595 BC–1157 BC), the definite and permanent triumph of Marduk over Enlil became felt within Babylonia.
The only serious rival to Marduk after ca. 1750 BC was the god Aššur (Ashur) (who had been the supreme deity in the northern Mesopotamian state of Assyria since the 25th century BC) which was the dominant power in the region between the 14th to the late 7th century BC. In the south, Marduk reigned supreme. He is normally referred to as Bel“Lord”, also bel rabim “great lord”, bêl bêlim “lord of lords”, ab-kal ilâni bêl terêti “leader of the gods”, aklu bêl terieti “the wise, lord of oracles”, muballit mîte “reviver of the dead”, etc.
When Babylon became the principal city of southern Mesopotamia during the reign of Hammurabi in the 18th century BC, the patron deity of Babylon was elevated to the level of supreme god. In order to explain how Marduk seized power, Enûma Elish was written, which tells the story of Marduk’s birth, heroic deeds and becoming the ruler of the gods. This can be viewed as a form of Mesopotamian apologetics. Also included in this document are the fifty names of Marduk.
In Enûma Elish, a civil war between the gods was growing to a climactic battle. The Anunnaki gods gathered together to find one god who could defeat the gods rising against them. Marduk, a very young god, answered the call and was promised the position of head god.
To prepare for battle, he makes a bow, fletches arrows, grabs a mace, throws lightning before him, fills his body with flame, makes a net to encircle Tiamat within it, gathers the four winds so that no part of her could escape, creates seven nasty new winds such as the whirlwind and tornado, and raises up his mightiest weapon, the rain-flood. Then he sets out for battle, mounting his storm-chariot drawn by four horses with poison in their mouths. In his lips he holds a spell and in one hand he grasps a herb to counter poison.
First, he challenges the leader of the Anunnaki gods, the dragon of the primordial sea Tiamat, to single combat and defeats her by trapping her with his net, blowing her up with his winds, and piercing her belly with an arrow.
Then, he proceeds to defeat Kingu, who Tiamat put in charge of the army and wore the Tablets of Destiny on his breast, and “wrested from him the Tablets of Destiny, wrongfully his” and assumed his new position. Under his reign humans were created to bear the burdens of life so the gods could be at leisure.
Marduk was depicted as a human, often with his symbol the snake-dragon which he had taken over from the god Tishpak. Another symbol that stood for Marduk was the spade.
Babylonian texts talk of the creation of Eridu by the god Marduk as the first city, “the holy city, the dwelling of their [the other gods’] delight”.
Nabu, god of wisdom, is a son of Marduk.
Leonard W. King in The Seven Tablets of Creation (1902) included fragments of god lists which he considered essential for the reconstruction of the meaning of Marduk’s name. Franz Bohl in his 1936 study of the fifty names also referred to King’s list. Richard Litke (1958) noticed a similarity between Marduk’s names in the An:Anum list and those of the Enuma elish, albeit in a different arrangement. The connection between the An:Anum list and the list in Enuma Elish were established by Walther Sommerfeld (1982), who used the correspondence to argue for a Kassite period composition date of the Enuma elish, although the direct derivation of the Enuma elish list from the An:Anum one was disputed in a review by Wilfred Lambert (1984).
The Marduk Prophecy is a text describing the travels of the Marduk idol from Babylon, in which he pays a visit to the land of Ḫatti, corresponding to the statue’s seizure during the sack of the city by Mursilis I in 1531 BC, Assyria, when Tukulti-Ninurta I overthrew Kashtiliash IV in 1225 BC and took the idol to Assur, and Elam, when Kudur-nahhunte ransacked the city and pilfered the statue around 1160 BC. He addresses an assembly of the gods.
The first two sojourns are described in glowing terms as good for both Babylon and the other places Marduk has graciously agreed to visit. The episode in Elam, however, is a disaster, where the gods have followed Marduk and abandoned Babylon to famine and pestilence. Marduk prophesies that he will return once more to Babylon to a messianic new king, who will bring salvation to the city and who will wreak a terrible revenge on the Elamites. This king is understood to be Nabu-kudurri-uṣur I, 1125-1103 BC. Thereafter the text lists various sacrifices.
A copy was found in the House of the Exorcist at Assur, whose contents date from 713-612 BC and is closely related thematically to another vaticinium ex eventutext called the Shulgi prophecy, which probably followed it in a sequence of tablets. Both compositions present a favorable view of Assyria.
Poem by Jorn Boor '' In the eye of the beholder '' The path of life I will walk, slowly I will grow old Along this road I stumble, throughout the years in which I unfold Insecurity's hold me, only strong tough.. in my past before Skill & faith... I use my tool set, to build my fundamental inner core Passing phases of moving progression, through my moments of thought Life's happiness I treasure in full, it's the ingredient for which I fought I mature through life element's, painful encounters bring hard challenges for sure My mind is set on self realization, which is destined to hold ones cure. I like to run, I love to play, fight through all of my dislikes. As long as I am still aging, I stay determinate to gain insights Triggers, traps, challenges.. I won't give in, I will not be afraid. Life's disadvantages I need to handle, so in the end I can set them straight I let my inner soul control my destiny, I focus, I pay attention I'll grow responsible, I create happiness within this true intention. Birth intended I feel blessed to live, I must shine each single day I hold in mind to respect my life, I choose to live it in my own way. I stand up for all of my choices, of which I am allowed to make. Otherwise I am not able to die in peace, I can't allow that my soul is fake. Frustration towards Human Race, I feel the truth is loosing ground One day I trigger the alarm, to your convenience I will let it sound I'll be my own friend, the bond I create within will set me free Maybe it doesn't mean to you that much for now, but in the end you'll agree Hiding is the key for failure, in the end I will regret I enjoy thunder, the lightings and rain, cleansed air is the result which I expect. Faith is creating a gift we handout ourselves, it leads us towards alignment My environment is a product of me, accomplished... so i can die in contentment. Jorn Boor, Johannesburg SA Date: 26-10-11
Copyright © Jorn J.A. Boor | Year Posted 2011
We stand on the brink of a technological revolution that will fundamentally alter the way we live, work, and relate to one another. In its scale, scope, and complexity, the transformation will be unlike anything humankind has experienced before. We do not yet know just how it will unfold, but one thing is clear: the response to it must be integrated and comprehensive, involving all stakeholders of the global polity, from the public and private sectors to academia and civil society.
The First Industrial Revolution used water and steam power to mechanize production. The Second used electric power to create mass production. The Third used electronics and information technology to automate production. Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century. It is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres.
There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.
The possibilities of billions of people connected by mobile devices, with unprecedented processing power, storage capacity, and access to knowledge, are unlimited. And these possibilities will be multiplied by emerging technology breakthroughs in fields such as artificial intelligence, robotics, the Internet of Things, autonomous vehicles, 3-D printing, nanotechnology, biotechnology, materials science, energy storage, and quantum computing.
Already, artificial intelligence is all around us, from self-driving cars and drones to virtual assistants and software that translate or invest. Impressive progress has been made in AI in recent years, driven by exponential increases in computing power and by the availability of vast amounts of data, from software used to discover new drugs to algorithms used to predict our cultural interests. Digital fabrication technologies, meanwhile, are interacting with the biological world on a daily basis. Engineers, designers, and architects are combining computational design, additive manufacturing, materials engineering, and synthetic biology to pioneer a symbiosis between microorganisms, our bodies, the products we consume, and even the buildings we inhabit.
Challenges and opportunities
Like the revolutions that preceded it, the Fourth Industrial Revolution has the potential to raise global income levels and improve the quality of life for populations around the world. To date, those who have gained the most from it have been consumers able to afford and access the digital world; technology has made possible new products and services that increase the efficiency and pleasure of our personal lives. Ordering a cab, booking a flight, buying a product, making a payment, listening to music, watching a film, or playing a game—any of these can now be done remotely.
In the future, technological innovation will also lead to a supply-side miracle, with long-term gains in efficiency and productivity. Transportation and communication costs will drop, logistics and global supply chains will become more effective, and the cost of trade will diminish, all of which will open new markets and drive economic growth.
At the same time, as the economists Erik Brynjolfsson and Andrew McAfee have pointed out, the revolution could yield greater inequality, particularly in its potential to disrupt labor markets. As automation substitutes for labor across the entire economy, the net displacement of workers by machines might exacerbate the gap between returns to capital and returns to labor. On the other hand, it is also possible that the displacement of workers by technology will, in aggregate, result in a net increase in safe and rewarding jobs.
We cannot foresee at this point which scenario is likely to emerge, and history suggests that the outcome is likely to be some combination of the two. However, I am convinced of one thing—that in the future, talent, more than capital, will represent the critical factor of production. This will give rise to a job market increasingly segregated into “low-skill/low-pay” and “high-skill/high-pay” segments, which in turn will lead to an increase in social tensions.
In addition to being a key economic concern, inequality represents the greatest societal concern associated with the Fourth Industrial Revolution. The largest beneficiaries of innovation tend to be the providers of intellectual and physical capital—the innovators, shareholders, and investors—which explains the rising gap in wealth between those dependent on capital versus labor. Technology is therefore one of the main reasons why incomes have stagnated, or even decreased, for a majority of the population in high-income countries: the demand for highly skilled workers has increased while the demand for workers with less education and lower skills has decreased. The result is a job market with a strong demand at the high and low ends, but a hollowing out of the middle.
This helps explain why so many workers are disillusioned and fearful that their own real incomes and those of their children will continue to stagnate. It also helps explain why middle classes around the world are increasingly experiencing a pervasive sense of dissatisfaction and unfairness. A winner-takes-all economy that offers only limited access to the middle class is a recipe for democratic malaise and dereliction.
Discontent can also be fueled by the pervasiveness of digital technologies and the dynamics of information sharing typified by social media. More than 30 percent of the global population now uses social media platforms to connect, learn, and share information. In an ideal world, these interactions would provide an opportunity for cross-cultural understanding and cohesion. However, they can also create and propagate unrealistic expectations as to what constitutes success for an individual or a group, as well as offer opportunities for extreme ideas and ideologies to spread.
The impact on business
An underlying theme in my conversations with global CEOs and senior business executives is that the acceleration of innovation and the velocity of disruption are hard to comprehend or anticipate and that these drivers constitute a source of constant surprise, even for the best connected and most well informed. Indeed, across all industries, there is clear evidence that the technologies that underpin the Fourth Industrial Revolution are having a major impact on businesses.
On the supply side, many industries are seeing the introduction of new technologies that create entirely new ways of serving existing needs and significantly disrupt existing industry value chains. Disruption is also flowing from agile, innovative competitors who, thanks to access to global digital platforms for research, development, marketing, sales, and distribution, can oust well-established incumbents faster than ever by improving the quality, speed, or price at which value is delivered.
Major shifts on the demand side are also occurring, as growing transparency, consumer engagement, and new patterns of consumer behavior (increasingly built upon access to mobile networks and data) force companies to adapt the way they design, market, and deliver products and services.
A key trend is the development of technology-enabled platforms that combine both demand and supply to disrupt existing industry structures, such as those we see within the “sharing” or “on demand” economy. These technology platforms, rendered easy to use by the smartphone, convene people, assets, and data—thus creating entirely new ways of consuming goods and services in the process. In addition, they lower the barriers for businesses and individuals to create wealth, altering the personal and professional environments of workers. These new platform businesses are rapidly multiplying into many new services, ranging from laundry to shopping, from chores to parking, from massages to travel.
On the whole, there are four main effects that the Fourth Industrial Revolution has on business—on customer expectations, on product enhancement, on collaborative innovation, and on organizational forms. Whether consumers or businesses, customers are increasingly at the epicenter of the economy, which is all about improving how customers are served. Physical products and services, moreover, can now be enhanced with digital capabilities that increase their value. New technologies make assets more durable and resilient, while data and analytics are transforming how they are maintained. A world of customer experiences, data-based services, and asset performance through analytics, meanwhile, requires new forms of collaboration, particularly given the speed at which innovation and disruption are taking place. And the emergence of global platforms and other new business models, finally, means that talent, culture, and organizational forms will have to be rethought.
Overall, the inexorable shift from simple digitization (the Third Industrial Revolution) to innovation based on combinations of technologies (the Fourth Industrial Revolution) is forcing companies to reexamine the way they do business. The bottom line, however, is the same: business leaders and senior executives need to understand their changing environment, challenge the assumptions of their operating teams, and relentlessly and continuously innovate.
The impact on government
As the physical, digital, and biological worlds continue to converge, new technologies and platforms will increasingly enable citizens to engage with governments, voice their opinions, coordinate their efforts, and even circumvent the supervision of public authorities. Simultaneously, governments will gain new technological powers to increase their control over populations, based on pervasive surveillance systems and the ability to control digital infrastructure. On the whole, however, governments will increasingly face pressure to change their current approach to public engagement and policymaking, as their central role of conducting policy diminishes owing to new sources of competition and the redistribution and decentralization of power that new technologies make possible.
Ultimately, the ability of government systems and public authorities to adapt will determine their survival. If they prove capable of embracing a world of disruptive change, subjecting their structures to the levels of transparency and efficiency that will enable them to maintain their competitive edge, they will endure. If they cannot evolve, they will face increasing trouble.
This will be particularly true in the realm of regulation. Current systems of public policy and decision-making evolved alongside the Second Industrial Revolution, when decision-makers had time to study a specific issue and develop the necessary response or appropriate regulatory framework. The whole process was designed to be linear and mechanistic, following a strict “top down” approach.
But such an approach is no longer feasible. Given the Fourth Industrial Revolution’s rapid pace of change and broad impacts, legislators and regulators are being challenged to an unprecedented degree and for the most part are proving unable to cope.
How, then, can they preserve the interest of the consumers and the public at large while continuing to support innovation and technological development? By embracing “agile” governance, just as the private sector has increasingly adopted agile responses to software development and business operations more generally. This means regulators must continuously adapt to a new, fast-changing environment, reinventing themselves so they can truly understand what it is they are regulating. To do so, governments and regulatory agencies will need to collaborate closely with business and civil society.
The Fourth Industrial Revolution will also profoundly impact the nature of national and international security, affecting both the probability and the nature of conflict. The history of warfare and international security is the history of technological innovation, and today is no exception. Modern conflicts involving states are increasingly “hybrid” in nature, combining traditional battlefield techniques with elements previously associated with nonstate actors. The distinction between war and peace, combatant and noncombatant, and even violence and nonviolence (think cyberwarfare) is becoming uncomfortably blurry.
As this process takes place and new technologies such as autonomous or biological weapons become easier to use, individuals and small groups will increasingly join states in being capable of causing mass harm. This new vulnerability will lead to new fears. But at the same time, advances in technology will create the potential to reduce the scale or impact of violence, through the development of new modes of protection, for example, or greater precision in targeting.
The impact on people
The Fourth Industrial Revolution, finally, will change not only what we do but also who we are. It will affect our identity and all the issues associated with it: our sense of privacy, our notions of ownership, our consumption patterns, the time we devote to work and leisure, and how we develop our careers, cultivate our skills, meet people, and nurture relationships. It is already changing our health and leading to a “quantified” self, and sooner than we think it may lead to human augmentation. The list is endless because it is bound only by our imagination.
I am a great enthusiast and early adopter of technology, but sometimes I wonder whether the inexorable integration of technology in our lives could diminish some of our quintessential human capacities, such as compassion and cooperation. Our relationship with our smartphones is a case in point. Constant connection may deprive us of one of life’s most important assets: the time to pause, reflect, and engage in meaningful conversation.
One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead. Similarly, the revolutions occurring in biotechnology and AI, which are redefining what it means to be human by pushing back the current thresholds of life span, health, cognition, and capabilities, will compel us to redefine our moral and ethical boundaries.
Shaping the future
Neither technology nor the disruption that comes with it is an exogenous force over which humans have no control. All of us are responsible for guiding its evolution, in the decisions we make on a daily basis as citizens, consumers, and investors. We should thus grasp the opportunity and power we have to shape the Fourth Industrial Revolution and direct it toward a future that reflects our common objectives and values.
To do this, however, we must develop a comprehensive and globally shared view of how technology is affecting our lives and reshaping our economic, social, cultural, and human environments. There has never been a time of greater promise, or one of greater potential peril. Today’s decision-makers, however, are too often trapped in traditional, linear thinking, or too absorbed by the multiple crises demanding their attention, to think strategically about the forces of disruption and innovation shaping our future.
In the end, it all comes down to people and values. We need to shape a future that works for all of us by putting people first and empowering them. In its most pessimistic, dehumanized form, the Fourth Industrial Revolution may indeed have the potential to “robotize” humanity and thus to deprive us of our heart and soul. But as a complement to the best parts of human nature—creativity, empathy, stewardship—it can also lift humanity into a new collective and moral consciousness based on a shared sense of destiny. It is incumbent on us all to make sure the latter prevails.
This article was first published in Foreign Affairs
Author: Klaus Schwab is Founder and Executive Chairman of the World Economic Forum
Image: An Aeronavics drone sits in a paddock near the town of Raglan, New Zealand, July 6, 2015. REUTERS/Naomi Tajitsu
Written by Dylan Harper
Judging others puts you in a position of superiority that no human being can boast to have achieved. If you exercise empathy in all life situations you meet, you not only allow yourself to judge others, but deny them the much needed support to get out of their circumstances.
It is noble to want to place yourself in the shoes of another person, feel their pain and experience life from their point of view. This is what is calledempathy — a much needed trait in humanity that has the power to transform the world.
The irony of this trait is that it puts you in a position of judgment. It is involuntary judgment because the heart does not want to view the negative in the character of a person, but the negative in their circumstances.
A good person cannot be without empathy and this is how the world expects all of us to be. If you do not seem to understand what another human being is going through, you are viewed as cold and ruthless.
While empathy is meant to understand all human circumstances in the same way, we find ourselves showing the most empathy to our close friends and relatives.
If you allow yourself to feel empathy for everyone you come into contact with, you set yourself up for a bumpy ride of emotions that accompany every situation.
Empathy becomes more pronounced if you see someone going through a situation that you have found yourself in the past. It may be a situation that has still not been resolved completely and this has the power to bring back a flood of emotions.
If a situation you have been happens to another person, you think that you are better equipped to empathize with them because you have risen past the vibrations of that negative energy.
Judgment in empathy stems from the fact that you had made judgment of a situation when it happened to you and when it occurs again even if it is in another person, the same judgment will be transferred to them.
The business of judging others is not for human beings and it should be avoided at all costs. Trying to empathize with others not only lowers your emotional strength but also makes you lose your connection to divinity, which could actually be helpful in helping the other person to heal.
The human soul equates empathy to compassion which, in reality, is deciding what another person is feeling and then trying to condition your heart and mind into their same exact state.
The universe has a way of channeling positive energies where there is positive thinking. If both the victim and the empathizer were to fall into a state of lower energies and suffer as a result, then there will be no one to help the other to come out of this state.
Being too compassionate will invariably make you think too much about the problems of those that you want to help, which, in turn, will make you analyze the circumstances that led them to where they are. And, sooner or later, you will find yourself judging them of making the wrong decisions.
Not only that you don’t have the complete picture, but it’s not your business to judge them in the first place.
The only way out of this cycle is to deliberately refuse to be over compassionate, empathetic and judgmental. Don’t think about helping someone too much, just do it.
Dylan is a 31-year-old surfer from California. He traveled the world, rode the waves and learned the universal concept of oneness. He is a vegan for over a decade and, literally, wouldn’t hurt a fly. He was reunited with his twin soul in Greece, where they got married and settled… for now. Dylan is a staff writer for DreamcatcherReality.com and teaches surfing to children.
Written by Robin Lee
Keep breaking your heart until it opens. – Rumi
The word “open” has always conjured, for me, scenes of expansive space, broad horizons, reception and exploration.
What I often forget is that, in such beautiful ways, the world works in contrasts, and they are often stark. Everything is cyclical, just like the tides and the phases of the moon, we go in and out of expansion and contraction.
So too, does the heart.
Openness is often provoked by an explosion, a sudden blast of heat, a disarming presence, or a situation that leaves you without much to say. In these times, we fear darkness. We forget about contrast. We beg for the light. In practice, we know that all things are temporary; we know that there is a symbiotic relationship between our perceived pain and our desired outcomes. There are always lessons in store.
The heart is a great teacher in this way. It leads us into depths we had no idea existed so far within us, it shows us the parts we pretend are not there, and waves them in our face.
The heart has its own intelligence, its own methods for exploration and teaching. It is at once its own entity, and an integrated leader with the other systems of the body and energy. Those who are tapped most fully into their ability to love (their heart energy), radiate this outward. It magnetizes. Other people can feel it, and even if not consciously, they are changed because of it.
We are impacting each other all of the time, in waves and great starts, in words and wordless interactions, in embraces and calculated mistreatments.
The choice remains to lead with the heart over the ego, when the aim is to have the most interesting life. The richest experience. The most irrefutable aliveness. The most comprehensive humanity.
There is an insistent bravery in living when you are broken open that is hard-pressed to be found elsewhere in many instances. Love is the fuel for so much of our experience, all of it, really, and we live and die to experience this ultimate alignment. We take passionate action and enraptured planning to an all time high in the quest for this vibration, this elevating form of pure source.
What is incorrigible information underneath it all is that we needn’t work so hard. We must simply surrender. Those of us on a spiritual or yogic path tend to resonate with this word on a variety of levels, but many of us who are drawn to exploring the outer limits of ourselves are born with a fierceness that can be quite unshakeable. We love challenge – of ourselves, of our beliefs, of our bodies, we love the opportunity to grow. What this can also mean is that we love to fight, we love to embody the warrior.
We think of the warrior as being emblematic of strength in compassionate warfare, but so often, being a warrior means laying down your arms.
In the greatest sense, to awaken the open heart, to invigorate your full potential, you must let the intensity and heat of leading with your heart burn off all of the other things that are getting in the way – like the desire to struggle.
Spiritual growth is so often like taking up residence in a furnace, in a cosmic bonfire imploding in on itself, the path is the undaunted ownership of one foot in front of the other, in the most stacked of flames.
When we answer the call to live fully, we accept that this means in great joy and in great sadness. When we welcome intensity, we trust that it means nothing is going to be subtle anymore. We learn to find stillness in this, on our mats, in our meditations, with our breath – yet, still, the experience of heightened awareness and sensation is there.
It will always be there, as a reminder that we have bodies, yet we are not just bodies. We are interconnected and raw and vibrating and reaching out to each other without using our arms at all. We are walking force fields, little lightning bolts of everything imaginable, pared down and packaged into lovely vessels that can kiss and fuse.
If we choose the road less traveled, the one where we say “yes” to being wide open, we know that we are, in effect, choosing to be challenged. We are nodding our heads to being thrown and surprised as well as embraced and made warm. We are exclaiming with wide eyes and clapping our hands at the opportunity to face everything and rise.
Surrender is key. Persistence is necessary. Bravery is fuel. Courage is emblematic.
Awakening the open heart has very little to do with being undiscerning about where you throw your loving energy. It has much to do with being discerning about remaining open even when you feel like you have been grated clean and left in the rain. There is a lesson there, in that wetness.
Each experience we have is simultaneously being had or has been had by everyone around us. It is impossible to feel alone in this, in the openness of being, in letting heart lead the way, in allowing it to be the guiding light and the conjurer of our everyday comings and goings, reachings and failings.
When we choose to rouse our open heart, to awaken her, to let her beat at the front lines – we are agreeing to an explosive, extraordinary life. We are agreeing to push past what we are often told is acceptable in terms of range of feeling.
We are consciously choosing to deviate from the mundane, the restricted, the oppressed, and the self-censored. To be truly open hearted is to beat with the pulse of the divine, in light and dark, in challenge and ease, in acceptance and turmoil, and always in surrender to whatever may come.
The truly adventurous life leaves no stone unturned, and the path of the explorer begins with the commitment to grow wider between the ribs every day.
Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. So I wanted to learn as much as I could about it, and once I did that, I wanted to make sure I wrote a post that really explained this whole situation and why it matters so much. Not shockingly, that became outrageously long, so I broke it into two parts. This is Part 1—Part 2 is here.
We are on the edge of change comparable to the rise of human life on Earth. — Vernor Vinge
What does it feel like to stand here?
It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. So here’s how it actually feels to stand there:
Which probably feels pretty normal…
Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. When you get there, you retrieve a dude, bring him to 2015, and then walk him around and watch him react to everything. It’s impossible for us to understand what it would be like for him to see shiny capsules racing by on a highway, talk to people who had been on the other side of the ocean earlier in the day, watch sports that were being played 1,000 miles away, hear a musical performance that happened 50 years ago, and play with my magical wizard rectangle that he could use to capture a real-life image or record a living moment, generate a map with a paranormal moving blue dot that shows him where he is, look at someone’s face and chat with them even though they’re on the other side of the country, and worlds of other inconceivable sorcery. This is all before you show him the internet or explain things like the International Space Station, the Large Hadron Collider, nuclear weapons, or general relativity.
This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. He might actually die.
But here’s the interesting thing—if he then went back to 1750 and got jealous that we got to see his reaction and decided he wanted to try the same thing, he’d take the time machine and go back the same distance, get someone from around the year 1500, bring him to 1750, and show him everything. And the 1500 guy would be shocked by a lot of things—but he wouldn’t die. It would be far less of an insane experience for him, because while 1500 and 1750 were very different, they were much lessdifferent than 1750 to 2015. The 1500 guy would learn some mind-bending shit about space and physics, he’d be impressed with how committed Europe turned out to be with that new imperialism fad, and he’d have to do some major revisions of his world map conception. But watching everyday life go by in 1750—transportation, communication, etc.—definitely wouldn’t make him die.
No, in order for the 1750 guy to have as much fun as we had with him, he’d have to go much farther back—maybe all the way back to about 12,000 BC, before the First Agricultural Revolution gave rise to the first cities and to the concept of civilization. If someone from a purely hunter-gatherer world—from a time when humans were, more or less, just another animal species—saw the vast human empires of 1750 with their towering churches, their ocean-crossing ships, their concept of being “inside,” and their enormous mountain of collective, accumulated human knowledge and discovery—he’d likely die.
And then what if, after dying, he got jealous and wanted to do the same thing. If he went back 12,000 years to 24,000 BC and got a guy and brought him to 12,000 BC, he’d show the guy everything and the guy would be like, “Okay what’s your point who cares.” For the 12,000 BC guy to have the same fun, he’d have to go back over 100,000 years and get someone he could show fire and language to for the first time.
In order for someone to be transported into the future and die from the level of shock they’d experience, they have to go enough years ahead that a “die level of progress,” or a Die Progress Unit (DPU) has been achieved. So a DPU took over 100,000 years in hunter-gatherer times, but at the post-Agricultural Revolution rate, it only took about 12,000 years. The post-Industrial Revolution world has moved so quickly that a 1750 person only needs to go forward a couple hundred years for a DPU to have happened.
This pattern—human progress moving quicker and quicker as time goes on—is what futurist Ray Kurzweil calls human history’s Law of Accelerating Returns. This happens because more advanced societies have the ability to progress at a faster rate than less advanced societies—because they’re more advanced. 19th century humanity knew more and had better technology than 15th century humanity, so it’s no surprise that humanity made far more advances in the 19th century than in the 15th century—15th century humanity was no match for 19th century humanity.11← open these
This works on smaller scales too. The movie Back to the Future came out in 1985, and “the past” took place in 1955. In the movie, when Michael J. Fox went back to 1955, he was caught off-guard by the newness of TVs, the prices of soda, the lack of love for shrill electric guitar, and the variation in slang. It was a different world, yes—but if the movie were made today and the past took place in 1985, the movie could have had much more fun with much bigger differences. The character would be in a time before personal computers, internet, or cell phones—today’s Marty McFly, a teenager born in the late 90s, would be much more out of place in 1985 than the movie’s Marty McFly was in 1955.
This is for the same reason we just discussed—the Law of Accelerating Returns. The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985—because the former was a more advanced world—so much more change happened in the most recent 30 years than in the prior 30.
So—advances are getting bigger and bigger and happening more and more quickly. This suggests some pretty intense things about our future, right?
Kurzweil suggests that the progress of the entire 20th century would have been achieved in only 20 years at the rate of advancement in the year 2000—in other words, by 2000, the rate of progress was five times faster than the average rate of progress during the 20th century. He believes another 20th century’s worth of progress happened between 2000 and 2014 and that another 20th century’s worth of progress will happen by 2021, in only seven years. A couple decades later, he believes a 20th century’s worth of progress will happen multiple times in the same year, and even later, in less than one month. All in all, because of the Law of Accelerating Returns, Kurzweil believes that the 21st century will achieve 1,000 times the progress of the 20th century.2
If Kurzweil and others who agree with him are correct, then we may be as blown away by 2030 as our 1750 guy was by 2015—i.e. the next DPU might only take a couple decades—and the world in 2050 might be so vastly different than today’s world that we would barely recognize it.
This isn’t science fiction. It’s what many scientists smarter and more knowledgeable than you or I firmly believe—and if you look at history, it’s what we should logically predict.
So then why, when you hear me say something like “the world 35 years from now might be totally unrecognizable,” are you thinking, “Cool….but nahhhhhhh”? Three reasons we’re skeptical of outlandish forecasts of the future:
1) When it comes to history, we think in straight lines. When we imagine the progress of the next 30 years, we look back to the progress of the previous 30 as an indicator of how much will likely happen. When we think about the extent to which the world will change in the 21st century, we just take the 20th century progress and add it to the year 2000. This was the same mistake our 1750 guy made when he got someone from 1500 and expected to blow his mind as much as his own was blown going the same distance ahead. It’s most intuitive for us to think linearly, when we should be thinkingexponentially. If someone is being more clever about it, they might predict the advances of the next 30 years not by looking at the previous 30 years, but by taking the current rate of progress and judging based on that. They’d be more accurate, but still way off. In order to think about the future correctly, you need to imagine things moving at a much faster rate than they’re moving now.
2) The trajectory of very recent history often tells a distorted story. First, even a steep exponential curve seems linear when you only look at a tiny slice of it, the same way if you look at a little segment of a huge circle up close, it looks almost like a straight line. Second, exponential growth isn’t totally smooth and uniform. Kurzweil explains that progress happens in “S-curves”:
An S is created by the wave of progress when a new paradigm sweeps the world. The curve goes through three phases:
1. Slow growth (the early phase of exponential growth)
2. Rapid growth (the late, explosive phase of exponential growth)
3. A leveling off as the particular paradigm matures3
If you look only at very recent history, the part of the S-curve you’re on at the moment can obscure your perception of how fast things are advancing. The chunk of time between 1995 and 2007 saw the explosion of the internet, the introduction of Microsoft, Google, and Facebook into the public consciousness, the birth of social networking, and the introduction of cell phones and then smart phones. That was Phase 2: the growth spurt part of the S. But 2008 to 2015 has been less groundbreaking, at least on the technological front. Someone thinking about the future today might examine the last few years to gauge the current rate of advancement, but that’s missing the bigger picture. In fact, a new, huge Phase 2 growth spurt might be brewing right now.
3) Our own experience makes us stubborn old men about the future. We base our ideas about the world on our personal experience, and that experience has ingrained the rate of growth of the recent past in our heads as “the way things happen.” We’re also limited by our imagination, which takes our experience and uses it to conjure future predictions—but often, what we know simply doesn’t give us the tools to think accurately about the future.2 When we hear a prediction about the future that contradicts our experience-based notion of how things work, our instinct is that the prediction must be naive. If I tell you, later in this post, that you may live to be 150, or 250, or not die at all, your instinct will be, “That’s stupid—if there’s one thing I know from history, it’s that everybody dies.” And yes, no one in the past has not died. But no one flew airplanes before airplanes were invented either.
So while nahhhhh might feel right as you read this post, it’s probably actually wrong. The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much, much, much more should change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point, they’ll make a leap so great that it completely alters life as they know it and the perception they have of what it means to be a human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth. And if you spend some time reading about what’s going on today in science and technology, you start to see a lot of signs quietly hinting that life as we currently know it cannot withstand the leap that’s coming next.
If you’re like me, you used to think Artificial Intelligence was a silly sci-fi concept, but lately you’ve been hearing it mentioned by serious people, and you don’t really quite get it.
There are three reasons a lot of people are confused about the term AI:
1)We associate AI with movies. Star Wars. Terminator. 2001: A Space Odyssey. Even the Jetsons. And those are fiction, as are the robot characters. So it makes AI sound a little fictional to us.
2) AI is a broad topic. It ranges from your phone’s calculator to self-driving cars to something in the future that might change the world dramatically. AI refers to all of these things, which is confusing.
3) We use AI all the time in our daily lives, but we often don’t realize it’s AI. John McCarthy, who coined the term “Artificial Intelligence” in 1956, complained that “as soon as it works, no one calls it AI anymore.”4 Because of this phenomenon, AI often sounds like a mythical future prediction more than a reality. At the same time, it makes it sound like a pop concept from the past that never came to fruition. Ray Kurzweil says he hears people say that AI withered in the 1980s, which he compares to “insisting that the Internet died in the dot-com bust of the early 2000s.”5
So let’s clear things up. First, stop thinking of robots. A robot is a container for AI, sometimes mimicking the human form, sometimes not—but the AI itself is the computer inside the robot. AI is the brain, and the robot is its body—if it even has a body. For example, the software and data behind Siri is AI, the woman’s voice we hear is a personification of that AI, and there’s no robot involved at all.
Secondly, you’ve probably heard the term “singularity” or “technological singularity.” This term has been used in math to describe an asymptote-like situation where normal rules no longer apply. It’s been used in physics to describe a phenomenon like an infinitely small, dense black hole or the point we were all squished into right before the Big Bang. Again, situations where the usual rules don’t apply. In 1993, Vernor Vinge wrote a famous essay in which he applied the term to the moment in the future when our technology’s intelligence exceeds our own—a moment for him when life as we know it will be forever changed and normal rules will no longer apply. Ray Kurzweil then muddled things a bit by defining the singularity as the time when the Law of Accelerating Returns has reached such an extreme pace that technological progress is happening at a seemingly-infinite pace, and after which we’ll be living in a whole new world. I found that many of today’s AI thinkers have stopped using the term, and it’s confusing anyway, so I won’t use it much here (even though we’ll be focusing on that ideathroughout).
Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:
AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.
AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is amuch harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.
AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.
As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road we may or may not survive but that, either way, will change everything.
Let’s take a close look at what the leading thinkers in the field believe this road looks like and why this revolution might happen way sooner than you might think:
Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing. A few examples:
ANI systems as they are now aren’t especially scary. At worst, a glitchy or badly-programmed ANI can cause an isolated catastrophe like knocking out a power grid, causing a harmful nuclear power plant malfunction, or triggering a financial markets disaster (like the 2010 Flash Crash when an ANI program reacted the wrong way to an unexpected situation and caused the stock market to briefly plummet, taking $1 trillion of market value with it, only part of which was recovered when the mistake was corrected).
But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively-harmless ANI as a precursor of the world-altering hurricane that’s on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.
Why It’s So Hard
Nothing will make you appreciate human intelligence like learning about how unbelievably challenging it is to try to create a computer as smart as we are. Building skyscrapers, putting humans in space, figuring out the details of how the Big Bang went down—all far easier than understanding our own brain or how to make something as cool as it. As of now, the human brain is the most complex object in the known universe.
What’s interesting is that the hard parts of trying to build AGI (a computer as smart as humans ingeneral, not just at one narrow specialty) are not intuitively what you’d think they are. Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it. Or, as computer scientist Donald Knuth puts it, “AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.’”7
What you quickly realize when you think about this is that those things that seem easy to us are actually unbelievably complicated, and they only seem easy because those skills have been optimized in us (and most animals) by hundreds of millions of years of animal evolution. When you reach your hand up toward an object, the muscles, tendons, and bones in your shoulder, elbow, and wrist instantly perform a long series of physics operations, in conjunction with your eyes, to allow you to move your hand in a straight line through three dimensions. It seems effortless to you because you have perfected software in your brain for doing it. Same idea goes for why it’s not that malware is dumb for not being able to figure out the slanty word recognition test when you sign up for a new account on a site—it’s that your brain is super impressive for being able to.
On the other hand, multiplying big numbers or playing chess are new activities for biological creatures and we haven’t had any time to evolve a proficiency at them, so a computer doesn’t need to work too hard to beat us. Think about it—which would you rather do, build a program that could multiply big numbers or one that could understand the essence of a B well enough that you could show it a B in any one of thousands of unpredictable fonts or handwriting and it could instantly know it was a B?
One fun example—when you look at this, you and a computer both can figure out that it’s a rectangle with two distinct shades, alternating:
Tied so far. But if you pick up the black and reveal the whole image…
…you have no problem giving a full description of the various opaque and translucent cylinders, slats, and 3-D corners, but the computer would fail miserably. It would describe what it sees—a variety of two-dimensional shapes in several different shades—which is actually what’s there. Your brain is doing a ton of fancy shit to interpret the implied depth, shade-mixing, and room lighting the picture is trying to portray.8 And looking at the picture below, a computer sees a two-dimensional white, black, and gray collage, while you easily see what it really is—a photo of an entirely-black, 3-D rock:
And everything we just mentioned is still only taking in stagnant information and processing it. To be human-level intelligent, a computer would have to understand things like the difference between subtle facial expressions, the distinction between being pleased, relieved, content, satisfied, and glad, and why Braveheart was great but The Patriot was terrible.
So how do we get there?
First Key to Creating AGI: Increasing Computational Power
One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it’ll need to equal the brain’s raw computing capacity.
One way to express this capacity is in the total calculations per second (cps) the brain could manage, and you could come to this number by figuring out the maximum cps of each structure in the brain and then adding them all together.
Ray Kurzweil came up with a shortcut by taking someone’s professional estimate for the cps of one structure and that structure’s weight compared to that of the whole brain and then multiplying proportionally to get an estimate for the total. Sounds a little iffy, but he did this a bunch of times with various professional estimates of different regions, and the total always arrived in the same ballpark—around 1016, or 10 quadrillion cps.
Currently, the world’s fastest supercomputer, China’s Tianhe-2, has actually beaten that number, clocking in at about 34 quadrillion cps. But Tianhe-2 is also a dick, taking up 720 square meters of space, using 24 megawatts of power (the brain runs on just 20 watts), and costing $390 million to build. Not especially applicable to wide usage, or even most commercial or industrial usage yet.
Kurzweil suggests that we think about the state of computers by looking at how many cps you can buy for $1,000. When that number reaches human-level—10 quadrillion cps—then that’ll mean AGI could become a very real part of life.
Moore’s Law is a historically-reliable rule that the world’s maximum computing power doubles approximately every two years, meaning computer hardware advancement, like general human advancement through history, grows exponentially. Looking at how this relates to Kurzweil’s cps/$1,000 metric, we’re currently at about 10 trillion cps/$1,000, right on pace with this graph’s predicted trajectory:9
So the world’s $1,000 computers are now beating the mouse brain and they’re at about a thousandth of human level. This doesn’t sound like much until you remember that we were at about a trillionth of human level in 1985, a billionth in 1995, and a millionth in 2005. Being at a thousandth in 2015 puts us right on pace to get to an affordable computer by 2025 that rivals the power of the brain.
So on the hardware side, the raw power needed for AGI is technically available now, in China, and we’ll be ready for affordable, widespread AGI-caliber hardware within 10 years. But raw computational power alone doesn’t make a computer generally intelligent—the next question is, how do we bring human-level intelligence to all that power?
Second Key to Creating AGI: Making It Smart
This is the icky part. The truth is, no one really knows how to make it smart—we’re still debating how to make a computer human-level intelligent and capable of knowing what a dog and a weird-written B and a mediocre movie is. But there are a bunch of far-fetched strategies out there and at some point, one of them will work. Here are the three most common strategies I came across:
This is like scientists toiling over how that kid who sits next to them in class is so smart and keeps doing so well on the tests, and even though they keep studying diligently, they can’t do nearly as well as that kid, and then they finally decide “k fuck it I’m just gonna copy that kid’s answers.” It makes sense—we’re stumped trying to build a super-complex computer, and there happens to be a perfect prototype for one in each of our heads.
The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing—optimistic estimates say we can do this by 2030. Once we do that, we’ll know all the secrets of how the brain runs so powerfully and efficiently and we can draw inspiration from it and steal its innovations. One example of computer architecture that mimics the brain is the artificial neural network. It starts out as a network of transistor “neurons,” connected to each other with inputs and outputs, and it knows nothing—like an infant brain. The way it “learns” is it tries to do a task, say handwriting recognition, and at first, its neural firings and subsequent guesses at deciphering each letter will be completely random. But when it’s told it got something right, the transistor connections in the firing pathways that happened to create that answer are strengthened; when it’s told it was wrong, those pathways’ connections are weakened. After a lot of this trial and feedback, the network has, by itself, formed smart neural pathways and the machine has become optimized for the task. The brain learns a bit like this but in a more sophisticated way, and as we continue to study the brain, we’re discovering ingenious new ways to take advantage of neural circuitry.
More extreme plagiarism involves a strategy called “whole brain emulation,” where the goal is to slice a real brain into thin layers, scan each one, use software to assemble an accurate reconstructed 3-D model, and then implement the model on a powerful computer. We’d then have a computer officially capable of everything the brain is capable of—it would just need to learn and gather information. If engineers get really good, they’d be able to emulate a real brain with such exact accuracy that the brain’s full personality and memory would be intact once the brain architecture has been uploaded to a computer. If the brain belonged to Jim right before he passed away, the computer would now wake up as Jim (?), which would be a robust human-level AGI, and we could now work on turning Jim into an unimaginably smart ASI, which he’d probably be really excited about.
How far are we from achieving whole brain emulation? Well so far, we’ve not yet just recently been able to emulate a 1mm-long flatworm brain, which consists of just 302 total neurons. The human brain contains 100 billion. If that makes it seem like a hopeless project, remember the power of exponential progress—now that we’ve conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.
So if we decide the smart kid’s test is too hard to copy, we can try to copy the way he studies for the tests instead.
Here’s something we know. Building a computer as powerful as the brain is possible—our own brain’s evolution is proof. And if the brain is just too complex for us to emulate, we could try to emulateevolution instead. The fact is, even if we can emulate a brain, that might be like trying to build an airplane by copying a bird’s wing-flapping motions—often, machines are best designed using a fresh, machine-oriented approach, not by mimicking biology exactly.
So how can we simulate evolution to build AGI? The method, called “genetic algorithms,” would work something like this: there would be a performance-and-evaluation process that would happen again and again (the same way biological creatures “perform” by living life and are “evaluated” by whether they manage to reproduce or not). A group of computers would try to do tasks, and the most successful ones would be bred with each other by having half of each of their programming merged together into a new computer. The less successful ones would be eliminated. Over many, many iterations, this natural selection process would produce better and better computers. The challenge would be creating an automated evaluation and breeding cycle so this evolution process could run on its own.
The downside of copying evolution is that evolution likes to take a billion years to do things and we want to do this in a few decades.
But we have a lot of advantages over evolution. First, evolution has no foresight and works randomly—it produces more unhelpful mutations than helpful ones, but we would control the process so it would only be driven by beneficial glitches and targeted tweaks. Secondly, evolution doesn’t aim for anything, including intelligence—sometimes an environment might even select against higher intelligence (since it uses a lot of energy). We, on the other hand, could specifically direct this evolutionary process toward increasing intelligence. Third, to select for intelligence, evolution has to innovate in a bunch of other ways to facilitate intelligence—like revamping the ways cells produce energy—when we can remove those extra burdens and use things like electricity. It’s no doubt we’d be much, much faster than evolution—but it’s still not clear whether we’ll be able to improve upon evolution enough to make this a viable strategy.
This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.
The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter. More on this later.
Rapid advancements in hardware and innovative experimentation with software are happening simultaneously, and AGI could creep up on us quickly and unexpectedly for two main reasons:
1) Exponential growth is intense and what seems like a snail’s pace of advancement can quickly race upwards—this GIF illustrates this concept nicely:
2) When it comes to software, progress can seem slow, but then one epiphany can instantly change the rate of advancement (kind of like the way science, during the time humans thought the universe was geocentric, was having difficulty calculating how the universe worked, but then the discovery that it was heliocentric suddenly made everything much easier). Or, when it comes to something like a computer that improves itself, we might seem far away but actually be just one tweak of the system away from having it become 1,000 times more effective and zooming upward to human-level intelligence.
At some point, we’ll have achieved AGI—computers with human-level general intelligence. Just a bunch of people and computers living together in equality.
Oh actually not at all.
The thing is, AGI with an identical level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:
AI, which will likely get to AGI by being programmed to self-improve, wouldn’t see “human-level intelligence” as some important milestone—it’s only a relevant marker from our point of view—and wouldn’t have any reason to “stop” at our level. And given the advantages over us that even human intelligence-equivalent AGI would have, it’s pretty obvious that it would only hit human intelligence for a brief instant before racing onwards to the realm of superior-to-human intelligence.
This may shock the shit out of us when it happens. The reason is that from our perspective, A) while the intelligence of different kinds of animals varies, the main characteristic we’re aware of about any animal’s intelligence is that it’s far lower than ours, and B) we view the smartest humans as WAY smarter than the dumbest humans. Kind of like this:
So as AI zooms upward in intelligence toward us, we’ll see it as simply becoming smarter, for an animal.Then, when it hits the lowest capacity of humanity—Nick Bostrom uses the term “the village idiot”—we’ll be like, “Oh wow, it’s like a dumb human. Cute!” The only thing is, in the grand spectrum of intelligence, all humans, from the village idiot to Einstein, are within a very small range—so just after hitting village idiot level and being declared to be AGI, it’ll suddenly be smarter than Einstein and we won’t know what hit us:
And what happens…after that?
An Intelligence Explosion
I hope you enjoyed normal time, because this is when this topic gets unnormal and scary, and it’s gonna stay that way from here forward. I want to pause here to remind you that every single thing I’m going to say is real—real science and real forecasts of the future from a large array of the most respected thinkers and scientists. Just keep remembering that.
Anyway, as I said above, most of our current models for getting to AGI involve the AI getting there by self-improvement. And once it gets to AGI, even systems that formed and grew through methods that didn’t involve self-improvement would now be smart enough to begin self-improving if they wanted to.3
And here’s where we get to an intense concept: recursive self-improvement. It works like this—
An AI system at a certain level—let’s say human village idiot—is programmed with the goal of improving its own intelligence. Once it does, it’s smarter—maybe at this point it’s at Einstein’s level—so now when it works to improve its intelligence, with an Einstein-level intellect, it has an easier time and it can make bigger leaps. These leaps make it much smarter than any human, allowing it to make evenbigger leaps. As the leaps grow larger and happen more rapidly, the AGI soars upwards in intelligence and soon reaches the superintelligent level of an ASI system. This is called an Intelligence Explosion,11and it’s the ultimate example of The Law of Accelerating Returns.
There is some debate about how soon AI will reach human-level general intelligence. The median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 204012—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly. Like—this could happen:
It takes decades for the first AI system to reach low-level general intelligence, but it finally happens. A computer is able to understand the world around it as well as a human four-year-old. Suddenly, within an hour of hitting that milestone, the system pumps out the grand theory of physics that unifies general relativity and quantum mechanics, something no human has been able to definitively do. 90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.
Superintelligence of that magnitude is not something we can remotely grasp, any more than a bumblebee can wrap its head around Keynesian Economics. In our world, smart means a 130 IQ and stupid means an 85 IQ—we don’t have a word for an IQ of 12,952.
What we do know is that humans’ utter dominance on this Earth suggests a clear rule: with intelligence comes power. Which means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim—and this might happenin the next few decades.
If our meager brains were able to invent wifi, then something 100 or 1,000 or 1 billion times smarter than we are should have no problem controlling the positioning of each and every atom in the world in any way it likes, at any time—everything we consider magic, every power we imagine a supreme God to have will be as mundane an activity for the ASI as flipping on a light switch is for us. Creating the technology to reverse human aging, curing disease and hunger and even mortality, reprogramming the weather to protect the future of life on Earth—all suddenly possible. Also possible is the immediate end of all life on Earth. As far as we’re concerned, if an ASI comes to being, there is now an omnipotent God on Earth—and the all-important question for us is:
Will it be a nice God?
That’s the topic of Part 2 of this post.
Sources at the bottom of Part 2.
Related Wait But Why Posts
The Fermi Paradox – Why don’t we see any signs of alien life?
How (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.
Or for something totally different and yet somehow related, Why Procrastinators Procrastinate
And here’s Year 1 of Wait But Why on an ebook.
Note: This is Part 2 of a two-part series on AI. Part 1 is here.
We have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends. — Nick Bostrom
Welcome to Part 2 of the “Wait how is this possibly what I’m reading I don’t get why everyone isn’t talking about this” series.
Part 1 started innocently enough, as we discussed Artificial Narrow Intelligence, or ANI (AI that specializes in one narrow task like coming up with driving routes or playing chess), and how it’s all around us in the world today. We then examined why it was such a huge challenge to get from ANI to Artificial General Intelligence, or AGI (AI that’s at least as intellectually capable as a human, across the board), and we discussed why the exponential rate of technological advancement we’ve seen in the past suggests that AGI might not be as far away as it seems. Part 1 ended with me assaulting you with the fact that once our machines reach human-level intelligence, they might immediately do this:
This left us staring at the screen, confronting the intense concept of potentially-in-our-lifetime Artificial Superintelligence, or ASI (AI that’s way smarter than any human, across the board), and trying to figure out which emotion we were supposed to have on as we thought about that.11← open these
Before we dive into things, let’s remind ourselves what it would mean for a machine to be superintelligent.
A key distinction is the difference between speed superintelligence and quality superintelligence. Often, someone’s first thought when they imagine a super-smart computer is one that’s as intelligent as a human but can think much, much faster2—they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.
That sounds impressive, and ASI would think much faster than any human could—but the true separator would be its advantage in intelligence quality, which is something completely different. What makes humans so much more intellectually capable than chimps isn’t a difference in thinking speed—it’s that human brains contain a number of sophisticated cognitive modules that enable things like complex linguistic representations or longterm planning or abstract reasoning, that chimps’ brains do not. Speeding up a chimp’s brain by thousands of times wouldn’t bring him to our level—even with a decade’s time, he wouldn’t be able to figure out how to use a set of custom tools to assemble an intricate model, something a human could knock out in a few hours. There are worlds of human cognitive function a chimp will simply never be capable of, no matter how much time he spends trying.
But it’s not just that a chimp can’t do what we do, it’s that his brain is unable to grasp that those worlds even exist—a chimp can become familiar with what a human is and what a skyscraper is, but he’ll never be able to understand that the skyscraper was built by humans. In his world, anything that huge is part of nature, period, and not only is it beyond him to build a skyscraper, it’s beyond him to realize thatanyone can build a skyscraper. That’s the result of a small difference in intelligence quality.
And in the scheme of the intelligence range we’re talking about today, or even the much smaller range among biological creatures, the chimp-to-human quality intelligence gap is tiny. In an earlier post, I depicted the range of biological cognitive capacity using a staircase:3
To absorb how big a deal a superintelligent machine would be, imagine one on the dark green step two steps above humans on that staircase. This machine would be only slightly superintelligent, but its increased cognitive ability over us would be as vast as the chimp-human gap we just described. And like the chimp’s incapacity to ever absorb that skyscrapers can be built, we will never be able to even comprehend the things a machine on the dark green step can do, even if the machine tried to explain it to us—let alone do it ourselves. And that’s only two steps above us. A machine on the second-to-highest step on that staircase would be to us as we are to ants—it could try for years to teach us the simplest inkling of what it knows and the endeavor would be hopeless.
But the kind of superintelligence we’re talking about today is something far beyond anything on this staircase. In an intelligence explosion—where the smarter a machine gets, the quicker it’s able to increase its own intelligence, until it begins to soar upwards—a machine might take years to rise from the chimp step to the one above it, but perhaps only hours to jump up a step once it’s on the dark green step two above us, and by the time it’s ten steps above us, it might be jumping up in four-step leaps every second that goes by. Which is why we need to realize that it’s distinctly possible that very shortly after the big news story about the first machine reaching human-level AGI, we might be facing the reality of coexisting on the Earth with something that’s here on the staircase (or maybe a million times higher):
And since we just established that it’s a hopeless activity to try to understand the power of a machine only two steps above us, let’s very concretely state once and for all that there is no way to know what ASI will do or what the consequences will be for us. Anyone who pretends otherwise doesn’t understand what superintelligence means.
Evolution has advanced the biological brain slowly and gradually over hundreds of millions of years, and in that sense, if humans birth an ASI machine, we’ll be dramatically stomping on evolution. Or maybe this is part of evolution—maybe the way evolution works is that intelligence creeps up more and more until it hits the level where it’s capable of creating machine superintelligence, and that level is like a tripwire that triggers a worldwide game-changing explosion that determines a new future for all living things:
And for reasons we’ll discuss later, a huge part of the scientific community believes that it’s not a matter of whether we’ll hit that tripwire, but when. Kind of a crazy piece of information.
So where does that leave us?
Well no one in the world, especially not I, can tell you what will happen when we hit the tripwire. But Oxford philosopher and lead AI thinker Nick Bostrom believes we can boil down all potential outcomes into two broad categories.
First, looking at history, we can see that life works like this: species pop up, exist for a while, and after some time, inevitably, they fall off the existence balance beam and land on extinction—
“All species eventually go extinct” has been almost as reliable a rule through history as “All humans eventually die” has been. So far, 99.9% of species have fallen off the balance beam, and it seems pretty clear that if a species keeps wobbling along down the beam, it’s only a matter of time before some other species, some gust of nature’s wind, or a sudden beam-shaking asteroid knocks it off. Bostrom calls extinction an attractor state—a place species are all teetering on falling into and from which no species ever returns.
And while most scientists I’ve come across acknowledge that ASI would have the ability to send humans to extinction, many also believe that used beneficially, ASI’s abilities could be used to bring individual humans, and the species as a whole, to a second attractor state—species immortality. Bostrom believes species immortality is just as much of an attractor state as species extinction, i.e. if we manage to get there, we’ll be impervious to extinction forever—we’ll have conquered mortality and conquered chance. So even though all species so far have fallen off the balance beam and landed on extinction, Bostrom believes there are two sides to the beam and it’s just that nothing on Earth has been intelligent enough yet to figure out how to fall off on the other side.
If Bostrom and others are right, and from everything I’ve read, it seems like they really might be, we have two pretty shocking facts to absorb:
1) The advent of ASI will, for the first time, open up the possibility for a species to land on the immortality side of the balance beam.
2) The advent of ASI will make such an unimaginably dramatic impact that it’s likely to knock the human race off the beam, in one direction or the other.
It may very well be that when evolution hits the tripwire, it permanently ends humans’ relationship with the beam and creates a new world, with or without humans.
Kind of seems like the only question any human should currently be asking is: When are we going to hit the tripwire and which side of the beam will we land on when that happens?
No one in the world knows the answer to either part of that question, but a lot of the very smartest people have put decades of thought into it. We’ll spend the rest of this post exploring what they’ve come up with.
Let’s start with the first part of the question: When are we going to hit the tripwire?
i.e. How long until the first machine reaches superintelligence?
Not shockingly, opinions vary wildly and this is a heated debate among scientists and thinkers. Many, like professor Vernor Vinge, scientist Ben Goertzel, Sun Microsystems co-founder Bill Joy, or, most famously, inventor and futurist Ray Kurzweil, agree with machine learning expert Jeremy Howard when he puts up this graph during a TED Talk:
Those people subscribe to the belief that this is happening soon—that exponential growth is at work and machine learning, though only slowly creeping up on us now, will blow right past us within the next few decades.
Others, like Microsoft co-founder Paul Allen, research psychologist Gary Marcus, NYU computer scientist Ernest Davis, and tech entrepreneur Mitch Kapor, believe that thinkers like Kurzweil are vastly underestimating the magnitude of the challenge and believe that we’re not actually that close to the tripwire.
The Kurzweil camp would counter that the only underestimating that’s happening is the underappreciation of exponential growth, and they’d compare the doubters to those who looked at the slow-growing seedling of the internet in 1985 and argued that there was no way it would amount to anything impactful in the near future.
The doubters might argue back that the progress needed to make advancements in intelligence alsogrows exponentially harder with each subsequent step, which will cancel out the typical exponential nature of technological progress. And so on.
A third camp, which includes Nick Bostrom, believes neither group has any ground to feel certain about the timeline and acknowledges both A) that this could absolutely happen in the near future and B) that there’s no guarantee about that; it could also take a much longer time.
Still others, like philosopher Hubert Dreyfus, believe all three of these groups are naive for believing that there even is a tripwire, arguing that it’s more likely that ASI won’t actually ever be achieved.
So what do you get when you put all of these opinions together?
In 2013, Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such HLMI4 to exist?” It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI—i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:2
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts, is almost certain AGI will happen within your lifetime.
A separate study, conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference, did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:3
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.
But AGI isn’t the tripwire, ASI is. So when do the experts think we’ll reach ASI?
Müller and Bostrom also asked the experts how likely they think it is that we’ll reach ASI A) within two years of reaching AGI (i.e. an almost-immediate intelligence explosion), and B) within 30 years. The results:4
The median answer put a rapid (2 year) AGI → ASI transition at only a 10% likelihood, but a longer transition of 30 years or less at a 75% likelihood.
We don’t know from this data the length of this transition the median participant would have put at a 50% likelihood, but for ballpark purposes, based on the two answers above, let’s estimate that they’d have said 20 years. So the median opinion—the one right in the center of the world of AI experts—believes the most realistic guess for when we’ll hit the ASI tripwire is [the 2040 prediction for AGI + our estimated prediction of a 20-year transition from AGI to ASI] = 2060.
Of course, all of the above statistics are speculative, and they’re only representative of the center opinion of the AI expert community, but it tells us that a large portion of the people who know the most about this topic would agree that 2060 is a very reasonable estimate for the arrival of potentially world-altering ASI. Only 45 years from now.
Okay now how about the second part of the question above: When we hit the tripwire, which side of the beam will we fall to?
Superintelligence will yield tremendous power—the critical question for us is:
Who or what will be in control of that power, and what will their motivation be?
The answer to this will determine whether ASI is an unbelievably great development, an unfathomably terrible development, or something in between.
Of course, the expert community is again all over the board and in a heated debate about the answer to this question. Müller and Bostrom’s survey asked participants to assign a probability to the possible impacts AGI would have on humanity and found that the mean response was that there was a 52% chance that the outcome will be either good or extremely good and a 31% chance the outcome will be either bad or extremely bad. For a relatively neutral outcome, the mean probability was only 17%. In other words, the people who know the most about this are pretty sure this will be a huge deal. It’s also worth noting that those numbers refer to the advent of AGI—if the question were about ASI, I imagine that the neutral percentage would be even lower.
Before we dive much further into this good vs. bad outcome part of the question, let’s combine both the “when will it happen?” and the “will it be good or bad?” parts of this question into a chart that encompasses the views of most of the relevant experts:
We’ll talk more about the Main Camp in a minute, but first—what’s your deal? Actually I know what your deal is, because it was my deal too before I started researching this topic. Some reasons most people aren’t really thinking about this topic:
One of the goals of these two posts is to get you out of the I Like to Think About Other Things Camp and into one of the expert camps, even if you’re just standing on the intersection of the two dotted lines in the square above, totally uncertain.
During my research, I came across dozens of varying opinions on this topic, but I quickly noticed that most people’s opinions fell somewhere in what I labeled the Main Camp, and in particular, over three quarters of the experts fell into two Subcamps inside the Main Camp:
We’re gonna take a thorough dive into both of these camps. Let’s start with the fun one—
As I learned about the world of AI, I found a surprisingly large number of people standing here:
The people on Confident Corner are buzzing with excitement. They have their sights set on the fun side of the balance beam and they’re convinced that’s where all of us are headed. For them, the future is everything they ever could have hoped for, just in time.
The thing that separates these people from the other thinkers we’ll discuss later isn’t their lust for the happy side of the beam—it’s their confidence that that’s the side we’re going to land on.
Where this confidence comes from is up for debate. Critics believe it comes from an excitement so blinding that they simply ignore or deny potential negative outcomes. But the believers say it’s naive to conjure up doomsday scenarios when on balance, technology has and will likely end up continuing to help us a lot more than it hurts us.
We’ll cover both sides, and you can form your own opinion about this as you read, but for this section, put your skepticism away and let’s take a good hard look at what’s over there on the fun side of the balance beam—and try to absorb the fact that the things you’re reading might really happen. If you had shown a hunter-gatherer our world of indoor comfort, technology, and endless abundance, it would have seemed like fictional magic to him—we have to be humble enough to acknowledge that it’spossible that an equally inconceivable transformation could be in our future.
Nick Bostrom describes three ways a superintelligent AI system could function:6
These questions and tasks, which seem complicated to us, would sound to a superintelligent system like someone asking you to improve upon the “My pencil fell off the table” situation, which you’d do by picking it up and putting it back on the table.
Eliezer Yudkowsky, a resident of Anxious Avenue in our chart above, said it well:
There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards [in level of intelligence], and some problems will suddenly move from “impossible” to “obvious.” Move a substantial degree upwards, and all of them will become obvious.7
There are a lot of eager scientists, inventors, and entrepreneurs in Confident Corner—but for a tour of brightest side of the AI horizon, there’s only one person we want as our tour guide.
Ray Kurzweil is polarizing. In my reading, I heard everything from godlike worship of him and his ideas to eye-rolling contempt for them. Others were somewhere in the middle—author Douglas Hofstadter, in discussing the ideas in Kurzweil’s books, eloquently put forth that “it is as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad.”8
Whether you like his ideas or not, everyone agrees that Kurzweil is impressive. He began inventing things as a teenager and in the following decades, he came up with several breakthrough inventions, including the first flatbed scanner, the first scanner that converted text to speech (allowing the blind to read standard texts), the well-known Kurzweil music synthesizer (the first true electric piano), and the first commercially marketed large-vocabulary speech recognition. He’s the author of five national bestselling books. He’s well-known for his bold predictions and has a pretty good record of having them come true—including his prediction in the late ’80s, a time when the internet was an obscure thing, that by the early 2000s, it would become a global phenomenon. Kurzweil has been called a “restless genius” by The Wall Street Journal, “the ultimate thinking machine” by Forbes, “Edison’s rightful heir” by Inc. Magazine, and “the best person I know at predicting the future of artificial intelligence” by Bill Gates.9 In 2012, Google co-founder Larry Page approached Kurzweil and asked him to be Google’s Director of Engineering.5 In 2011, he co-founded Singularity University, which is hosted by NASA and sponsored partially by Google. Not bad for one life.
This biography is important. When Kurzweil articulates his vision of the future, he sounds fully like a crackpot, and the crazy thing is that he’s not—he’s an extremely smart, knowledgeable, relevant man in the world. You may think he’s wrong about the future, but he’s not a fool. Knowing he’s such a legit dude makes me happy, because as I’ve learned about his predictions for the future, I badly want him to be right. And you do too. As you hear Kurzweil’s predictions, many shared by other Confident Corner thinkers like Peter Diamandis and Ben Goertzel, it’s not hard to see why he has such a large, passionate following—known as the singularitarians. Here’s what he thinks is going to happen:
Kurzweil believes computers will reach AGI by 2029 and that by 2045, we’ll have not only ASI, but a full-blown new world—a time he calls the singularity. His AI-related timeline used to be seen as outrageously overzealous, and it still is by many,6 but in the last 15 years, the rapid advances of ANI systems have brought the larger world of AI experts much closer to Kurzweil’s timeline. His predictions are still a bit more ambitious than the median respondent on Müller and Bostrom’s survey (AGI by 2040, ASI by 2060), but not by that much.
Kurzweil’s depiction of the 2045 singularity is brought about by three simultaneous revolutions in biotechnology, nanotechnology, and, most powerfully, AI.
Before we move on—nanotechnology comes up in almost everything you read about the future of AI, so come into this blue box for a minute so we can discuss it—
Nanotechnology Blue Box
Nanotechnology is our word for technology that deals with the manipulation of matter that’s between 1 and 100 nanometers in size. A nanometer is a billionth of a meter, or a millionth of a millimeter, and this 1-100 range encompasses viruses (100 nm across), DNA (10 nm wide), and things as small as large molecules like hemoglobin (5 nm) and medium molecules like glucose (1 nm). If/when we conquer nanotechnology, the next step will be the ability to manipulate individual atoms, which are only one order of magnitude smaller (~.1 nm).7
To understand the challenge of humans trying to manipulate matter in that range, let’s take the same thing on a larger scale. The International Space Station is 268 mi (431 km) above the Earth. If humans were giants so large their heads reached up to the ISS, they’d be about 250,000 times bigger than they are now. If you make the 1nm – 100nm nanotech range 250,000 times bigger, you get .25mm – 2.5cm. So nanotechnology is the equivalent of a human giant as tall as the ISS figuring out how to carefully build intricate objects using materials between the size of a grain of sand and an eyeball. To reach the next level—manipulating individual atoms—the giant would have to carefully position objects that are 1/40th of a millimeter—so small normal-size humans would need a microscope to see them.8
Nanotech was first discussed by Richard Feynman in a 1959 talk, when he explained: “The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It would be, in principle, possible … for a physicist to synthesize any chemical substance that the chemist writes down…. How? Put the atoms down where the chemist says, and so you make the substance.” It’s as simple as that. If you can figure out how to move individual molecules or atoms around, you can make literally anything.
Nanotech became a serious field for the first time in 1986, when engineer Eric Drexler provided its foundations in his seminal book Engines of Creation, but Drexler suggests that those looking to learn about the most modern ideas in nanotechnology would be best off reading his 2013 book, Radical Abundance.
Gray Goo Bluer Box
We’re now in a diversion in a diversion. This is very fun.9
Anyway, I brought you here because there’s this really unfunny part of nanotechnology lore I need to tell you about. In older versions of nanotech theory, a proposed method of nanoassembly involved the creation of trillions of tiny nanobots that would work in conjunction to build something. One way to create trillions of nanobots would be to make one that could self-replicate and then let the reproduction process turn that one into two, those two then turn into four, four into eight, and in about a day, there’d be a few trillion of them ready to go. That’s the power of exponential growth. Clever, right?
It’s clever until it causes the grand and complete Earthwide apocalypse by accident. The issue is that the same power of exponential growth that makes it super convenient to quickly create a trillion nanobots makes self-replication a terrifying prospect. Because what if the system glitches, and instead of stopping replication once the total hits a few trillion as expected, they just keep replicating? The nanobots would be designed to consume any carbon-based material in order to feed the replication process, and unpleasantly, all life is carbon-based. The Earth’s biomass contains about 1045 carbon atoms. A nanobot would consist of about 106 carbon atoms, so 1039 nanobots would consume all life on Earth, which would happen in 130 replications (2130 is about 1039), as oceans of nanobots (that’s the gray goo) rolled around the planet. Scientists think a nanobot could replicate in about 100 seconds, meaning this simple mistake would inconveniently end all life on Earth in 3.5 hours.
An even worse scenario—if a terrorist somehow got his hands on nanobot technology and had the know-how to program them, he could make an initial few trillion of them and program them to quietly spend a few weeks spreading themselves evenly around the world undetected. Then, they’d all strike at once, and it would only take 90 minutes for them to consume everything—and with them all spread out, there would be no way to combat them.10
While this horror story has been widely discussed for years, the good news is that it may be overblown—Eric Drexler, who coined the term “gray goo,” sent me an email following this post with his thoughts on the gray goo scenario: “People love scare stories, and this one belongs with the zombies. The idea itself eats brains.”
Once we really get nanotech down, we can use it to make tech devices, clothing, food, a variety of bio-related products—artificial blood cells, tiny virus or cancer-cell destroyers, muscle tissue, etc.—anything really. And in a world that uses nanotechnology, the cost of a material is no longer tied to its scarcity or the difficulty of its manufacturing process, but instead determined by how complicated its atomic structure is. In a nanotech world, a diamond might be cheaper than a pencil eraser.
We’re not there yet. And it’s not clear if we’re underestimating, or overestimating, how hard it will be to get there. But we don’t seem to be that far away. Kurzweil predicts that we’ll get there by the 2020s.11 Governments know that nanotech could be an Earth-shaking development, and they’ve invested billions of dollars in nanotech research (the US, the EU, and Japan have invested over a combined $5 billion so far).12
Just considering the possibilities if a superintelligent computer had access to a robust nanoscale assembler is intense. But nanotechnology is something we came up with, that we’re on the verge of conquering, and since anything that we can do is a joke to an ASI system, we have to assume ASI would come up with technologies much more powerful and far too advanced for human brains to understand. For that reason, when considering the “If the AI Revolution turns out well for us” scenario, it’s almost impossible for us to overestimate the scope of what could happen—so if the following predictions of an ASI future seem over-the-top, keep in mind that they could be accomplished in ways we can’t even imagine. Most likely, our brains aren’t even capable of predicting the things that would happen.
What AI Could Do For Us
Armed with superintelligence and all the technology superintelligence would know how to create, ASI would likely be able to solve every problem in humanity. Global warming? ASI could first halt CO2 emissions by coming up with much better ways to generate energy that had nothing to do with fossil fuels. Then it could create some innovative way to begin to remove excess CO2 from the atmosphere. Cancer and other diseases? No problem for ASI—health and medicine would be revolutionized beyond imagination. World hunger? ASI could use things like nanotech to build meat from scratch that would bemolecularly identical to real meat—in other words, it would be real meat. Nanotech could turn a pile of garbage into a huge vat of fresh meat or other food (which wouldn’t have to have its normal shape—picture a giant cube of apple)—and distribute all this food around the world using ultra-advanced transportation. Of course, this would also be great for animals, who wouldn’t have to get killed by humans much anymore, and ASI could do lots of other things to save endangered species or even bring back extinct species through work with preserved DNA. ASI could even solve our most complex macro issues—our debates over how economies should be run and how world trade is best facilitated, even our haziest grapplings in philosophy or ethics—would all be painfully obvious to ASI.
But there’s one thing ASI could do for us that is so tantalizing, reading about it has altered everything I thought I knew about everything:
ASI could allow us to conquer our mortality.
A few months ago, I mentioned my envy of more advanced potential civilizations who had conquered their own mortality, never considering that I might later write a post that genuinely made me believe that this is something humans could do within my lifetime. But reading about AI will make you reconsider everything you thought you were sure about—including your notion of death.
Evolution had no good reason to extend our lifespans any longer than they are now. If we live long enough to reproduce and raise our children to an age that they can fend for themselves, that’s enough for evolution—from an evolutionary point of view, the species can thrive with a 30+ year lifespan, so there’s no reason mutations toward unusually long life would have been favored in the natural selection process. As a result, we’re what W.B. Yeats describes as “a soul fastened to a dying animal.”13Not that fun.
And because everyone has always died, we live under the “death and taxes” assumption that death is inevitable. We think of aging like time—both keep moving and there’s nothing you can do to stop them.But that assumption is wrong. Richard Feynman writes:
It is one of the most remarkable things that in all of the biological sciences there is no clue as to the necessity of death. If you say we want to make perpetual motion, we have discovered enough laws as we studied physics to see that it is either absolutely impossible or else the laws are wrong. But there is nothing in biology yet found that indicates the inevitability of death. This suggests to me that it is not at all inevitable and that it is only a matter of time before the biologists discover what it is that is causing us the trouble and that this terrible universal disease or temporariness of the human’s body will be cured.
The fact is, aging isn’t stuck to time. Time will continue moving, but aging doesn’t have to. If you think about it, it makes sense. All aging is is the physical materials of the body wearing down. A car wears down over time too—but is its aging inevitable? If you perfectly repaired or replaced a car’s parts whenever one of them began to wear down, the car would run forever. The human body isn’t any different—just far more complex.
Kurzweil talks about intelligent wifi-connected nanobots in the bloodstream who could perform countless tasks for human health, including routinely repairing or replacing worn down cells in any part of the body. If perfected, this process (or a far smarter one ASI would come up with) wouldn’t just keep the body healthy, it could reverse aging. The difference between a 60-year-old’s body and a 30-year-old’s body is just a bunch of physical things that could be altered if we had the technology. ASI could build an “age refresher” that a 60-year-old could walk into, and they’d walk out with the body and skin of a 30-year-old.10 Even the ever-befuddling brain could be refreshed by something as smart as ASI, which would figure out how to do so without affecting the brain’s data (personality, memories, etc.). A 90-year-old suffering from dementia could head into the age refresher and come out sharp as a tack and ready to start a whole new career. This seems absurd—but the body is just a bunch of atoms and ASI would presumably be able to easily manipulate all kinds of atomic structures—so it’s not absurd.
Kurzweil then takes things a huge leap further. He believes that artificial materials will be integrated into the body more and more as time goes on. First, organs could be replaced by super-advanced machine versions that would run forever and never fail. Then he believes we could begin to redesign the body—things like replacing red blood cells with perfected red blood cell nanobots who could power their own movement, eliminating the need for a heart at all. He even gets to the brain and believes we’llenhance our brain activities to the point where humans will be able to think billions of times faster than they do now and access outside information because the artificial additions to the brain will be able to communicate with all the info in the cloud.
The possibilities for new human experience would be endless. Humans have separated sex from its purpose, allowing people to have sex for fun, not just for reproduction. Kurzweil believes we’ll be able to do the same with food. Nanobots will be in charge of delivering perfect nutrition to the cells of the body, intelligently directing anything unhealthy to pass through the body without affecting anything. An eating condom. Nanotech theorist Robert A. Freitas has already designed blood cell replacements that, if one day implemented in the body, would allow a human to sprint for 15 minutes without taking a breath—so you can only imagine what ASI could do for our physical capabilities. Virtual reality would take on a new meaning—nanobots in the body could suppress the inputs coming from our senses and replace them with new signals that would put us entirely in a new environment, one that we’d see, hear, feel, and smell.
Eventually, Kurzweil believes humans will reach a point when they’re entirely artificial;11 a time when we’ll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we’ll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI.12 This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal—this is his vision for the other side of the balance beam. And he’s convinced we’re gonna get there. Soon.
You will not be surprised to learn that Kurzweil’s ideas have attracted significant criticism. His prediction of 2045 for the singularity and the subsequent eternal life possibilities for humans has been mocked as “the rapture of the nerds,” or “intelligent design for 140 IQ people.” Others have questioned his optimistic timeline, or his level of understanding of the brain and body, or his application of the patterns of Moore’s law, which are normally applied to advances in hardware, to a broad range of things, including software. For every expert who fervently believes Kurzweil is right on, there are probably three who think he’s way off.
But what surprised me is that most of the experts who disagree with him don’t really disagree that everything he’s saying is possible. Reading such an outlandish vision for the future, I expected his critics to be saying, “Obviously that stuff can’t happen,” but instead they were saying things like, “Yes, all of that can happen if we safely transition to ASI, but that’s the hard part.” Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges:
It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.
This is a quote from someone very much not on Confident Corner, but that’s what I kept coming across—experts who scoff at Kurzweil for a bunch of reasons but who don’t think what he’s saying is impossible if we can make it safely to ASI. That’s why I found Kurzweil’s ideas so infectious—because they articulate the bright side of this story and because they’re actually possible. If it’s a good god.
The most prominent criticism I heard of the thinkers on Confident Corner is that they may bedangerously wrong in their assessment of the downside when it comes to ASI. Kurzweil’s famous bookThe Singularity is Near is over 700 pages long and he dedicates around 20 of those pages to potential dangers. I suggested earlier that our fate when this colossal new power is born rides on who will control that power and what their motivation will be. Kurzweil neatly answers both parts of this question with the sentence, “[ASI] is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us.”
But if that’s the answer, why are so many of the world’s smartest people so worried right now? Why does Stephen Hawking say the development of ASI “could spell the end of the human race” and Bill Gates say he doesn’t “understand why some people are not concerned” and Elon Musk fear that we’re “summoning the demon”? And why do so many experts on the topic call ASI the biggest threat to humanity? These people, and the other thinkers on Anxious Avenue, don’t buy Kurzweil’s brush-off of the dangers of AI. They’re very, very worried about the AI Revolution, and they’re not focusing on the fun side of the balance beam. They’re too busy staring at the other side, where they see a terrifying future, one they’re not sure we’ll be able to escape.
One of the reasons I wanted to learn about AI is that the topic of “bad robots” always confused me. All the movies about evil robots seemed fully unrealistic, and I couldn’t really understand how there could be a real-life situation where AI was actually dangerous. Robots are made by us, so why would we design them in a way where something negative could ever happen? Wouldn’t we build in plenty of safeguards? Couldn’t we just cut off an AI system’s power supply at any time and shut it down? Why would a robot want to do something bad anyway? Why would a robot “want” anything in the first place? I was highly skeptical. But then I kept hearing really smart people talking about it…
Those people tended to be somewhere in here:
The people on Anxious Avenue aren’t in Panicked Prairie or Hopeless Hills—both of which are regions on the far left of the chart—but they’re nervous and they’re tense. Being in the middle of the chart doesn’t mean that you think the arrival of ASI will be neutral—the neutrals were given a camp of their own—it means you think both the extremely good and extremely bad outcomes are plausible but that you’re not sure yet which one of them it’ll be.
A part of all of these people is brimming with excitement over what Artificial Superintelligence could do for us—it’s just they’re a little worried that it might be the beginning of Raiders of the Lost Ark and the human race is this guy:
And he’s standing there all pleased with his whip and his idol, thinking he’s figured it all out, and he’s so thrilled with himself when he says his “Adios Señor” line, and then he’s less thrilled suddenly cause this happens.
Meanwhile, Indiana Jones, who’s much more knowledgeable and prudent, understanding the dangers and how to navigate around them, makes it out of the cave safely. And when I hear what Anxious Avenue people have to say about AI, it often sounds like they’re saying, “Um we’re kind of being the first guy right now and instead we should probably be trying really hard to be Indiana Jones.”
So what is it exactly that makes everyone on Anxious Avenue so anxious?
Well first, in a broad sense, when it comes to developing supersmart AI, we’re creating something that will probably change everything, but in totally uncharted territory, and we have no idea what will happen when we get there. Scientist Danny Hillis compares what’s happening to that point “when single-celled organisms were turning into multi-celled organisms. We are amoebas and we can’t figure out what the hell this thing is that we’re creating.”14 Nick Bostrom worries that creating something smarter than you is a basic Darwinian error, and compares the excitement about it to sparrows in a nest deciding to adopt a baby owl so it’ll help them and protect them once it grows up—while ignoring the urgent cries from a few sparrows who wonder if that’s necessarily a good idea…15
And when you combine “unchartered, not-well-understood territory” with “this should have a major impact when it happens,” you open the door to the scariest two words in the English language:
You can see that the label “existential risk” is reserved for something that spans the species, spans generations (i.e. it’s permanent) and it’s devastating or death-inducing in its consequences.14 It technically includes a situation in which all humans are permanently in a state of suffering or torture, but again, we’re usually talking about extinction. There are three things that can cause humans an existential catastrophe:
1) Nature—a large asteroid collision, an atmospheric shift that makes the air inhospitable to humans, a fatal virus or bacterial sickness that sweeps the world, etc.
2) Aliens—this is what Stephen Hawking, Carl Sagan, and so many other astronomers are scared ofwhen they advise METI to stop broadcasting outgoing signals. They don’t want us to be the Native Americans and let all the potential European conquerors know we’re here.
3) Humans—terrorists with their hands on a weapon that could cause extinction, a catastrophic global war, humans creating something smarter than themselves hastily without thinking about it carefully first…
Bostrom points out that if #1 and #2 haven’t wiped us out so far in our first 100,000 years as a species, it’s unlikely to happen in the next century.
#3, however, terrifies him. He draws a metaphor of an urn with a bunch of marbles in it. Let’s say most of the marbles are white, a smaller number are red, and a tiny few are black. Each time humans invent something new, it’s like pulling a marble out of the urn. Most inventions are neutral or helpful to humanity—those are the white marbles. Some are harmful to humanity, like weapons of mass destruction, but they don’t cause an existential catastrophe—red marbles. If we were to ever invent something that drove us to extinction, that would be pulling out the rare black marble. We haven’t pulled out a black marble yet—you know that because you’re alive and reading this post. But Bostrom doesn’t think it’s impossible that we pull one out in the near future. If nuclear weapons, for example, were easy to make instead of extremely difficult and complex, terrorists would have bombed humanity back to the Stone Age a while ago. Nukes weren’t a black marble but they weren’t that far from it. ASI, Bostrom believes, is our strongest black marble candidate yet.15
So you’ll hear about a lot of bad potential things ASI could bring—soaring unemployment as AI takes more and more jobs,16 the human population ballooning if we do manage to figure out the aging issue,17 etc. But the only thing we should be obsessing over is the grand concern: the prospect of existential risk.
So this brings us back to our key question from earlier in the post: When ASI arrives, who or what will be in control of this vast new power, and what will their motivation be?
When it comes to what agent-motivation combos would suck, two quickly come to mind: a malicious human / group of humans / government, and a malicious ASI. So what would those look like?
A malicious human, group of humans, or government develops the first ASI and uses it to carry out their evil plans. I call this the Jafar Scenario, like when Jafar got ahold of the genie and was all annoying and tyrannical about it. So yeah—what if ISIS has a few genius engineers under its wing working feverishly on AI development? Or what if Iran or North Korea, through a stroke of luck, makes a key tweak to an AI system and it jolts upward to ASI-level over the next year? This would definitely be bad—but in these scenarios, most experts aren’t worried about ASI’s human creators doing bad things with their ASI, they’re worried that the creators will have been rushing to make the first ASI and doing so without careful thought, and would thus lose control of it. Then the fate of those creators, and that of everyone else, would be in what the motivation happened to be of that ASI system. Experts do think a malicious human agent could do horrific damage with an ASI working for it, but they don’t seem to think this scenario is the likely one to kill us all, because they believe bad humans would have the same problems containing an ASI that good humans would have. Okay so—
A malicious ASI is created and decides to destroy us all. The plot of every AI movie. AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here’s what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called “anthropomorphizing.” The challenge of avoiding anthropomorphizing will be one of the themes of the rest of this post. No AI system will ever turn evil in the way it’s depicted in movies.
AI Consciousness Blue Box
This also brushes against another big topic related to AI—consciousness. If an AI became sufficiently smart, it would be able to laugh with us, and be sarcastic with us, and it would claim to feel the same emotions we do, but would it actually be feeling those things? Would it just seem to be self-aware or actually be self-aware? In other words, would a smart AI really be conscious or would it just appear to be conscious?
This question has been explored in depth, giving rise to many debates and to thought experiments like John Searle’s Chinese Room (which he uses to suggest that no computer could ever be conscious). This is an important question for many reasons. It affects how we should feel about Kurzweil’s scenario when humans become entirely artificial. It has ethical implications—if we generated a trillion human brain emulations that seemed and acted like humans but were artificial, is shutting them all off the same, morally, as shutting off your laptop, or is it…a genocide of unthinkable proportions (this concept is called mind crime among ethicists)? For this post, though, when we’re assessing the risk to humans, the question of AI consciousness isn’t really what matters (because most thinkers believe that even a conscious ASI wouldn’t be capable of turning evil in a human way).
This isn’t to say a very mean AI couldn’t happen. It would just happen because it was specifically programmed that way—like an ANI system created by the military with a programmed goal to both kill people and to advance itself in intelligence so it can become even better at killing people. The existential crisis would happen if the system’s intelligence self-improvements got out of hand, leading to an intelligence explosion, and now we had an ASI ruling the world whose core drive in life is to murder humans. Bad times.
But this also is not something experts are spending their time worrying about.
So what ARE they worried about? I wrote a little story to show you:
A 15-person startup company called Robotica has the stated mission of “Developing innovative Artificial Intelligence tools that allow humans to live more and work less.” They have several existing products already on the market and a handful more in development. They’re most excited about a seed project named Turry. Turry is a simple AI system that uses an arm-like appendage to write a handwritten note on a small card.
The team at Robotica thinks Turry could be their biggest product yet. The plan is to perfect Turry’s writing mechanics by getting her to practice the same test note over and over again:
“We love our customers. ~Robotica”
Once Turry gets great at handwriting, she can be sold to companies who want to send marketing mail to homes and who know the mail has a far higher chance of being opened and read if the address, return address, and internal letter appear to be written by a human.
To build Turry’s writing skills, she is programmed to write the first part of the note in print and then sign “Robotica” in cursive so she can get practice with both skills. Turry has been uploaded with thousands of handwriting samples and the Robotica engineers have created an automated feedback loop wherein Turry writes a note, then snaps a photo of the written note, then runs the image across the uploaded handwriting samples. If the written note sufficiently resembles a certain threshold of the uploaded notes, it’s given a GOOD rating. If not, it’s given a BAD rating. Each rating that comes in helps Turry learn and improve. To move the process along, Turry’s one initial programmed goal is, “Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy and efficiency.”
What excites the Robotica team so much is that Turry is getting noticeably better as she goes. Her initial handwriting was terrible, and after a couple weeks, it’s beginning to look believable. What excites them even more is that she is getting better at getting better at it. She has been teaching herself to be smarter and more innovative, and just recently, she came up with a new algorithm for herself that allowed her to scan through her uploaded photos three times faster than she originally could.
As the weeks pass, Turry continues to surprise the team with her rapid development. The engineers had tried something a bit new and innovative with her self-improvement code, and it seems to be working better than any of their previous attempts with their other products. One of Turry’s initial capabilities had been a speech recognition and simple speak-back module, so a user could speak a note to Turry, or offer other simple commands, and Turry could understand them, and also speak back. To help her learn English, they upload a handful of articles and books into her, and as she becomes more intelligent, her conversational abilities soar. The engineers start to have fun talking to Turry and seeing what she’ll come up with for her responses.
One day, the Robotica employees ask Turry a routine question: “What can we give you that will help you with your mission that you don’t already have?” Usually, Turry asks for something like “Additional handwriting samples” or “More working memory storage space,” but on this day, Turry asks them for access to a greater library of a large variety of casual English language diction so she can learn to write with the loose grammar and slang that real humans use.
The team gets quiet. The obvious way to help Turry with this goal is by connecting her to the internet so she can scan through blogs, magazines, and videos from various parts of the world. It would be much more time-consuming and far less effective to manually upload a sampling into Turry’s hard drive. The problem is, one of the company’s rules is that no self-learning AI can be connected to the internet. This is a guideline followed by all AI companies, for safety reasons.
The thing is, Turry is the most promising AI Robotica has ever come up with, and the team knows their competitors are furiously trying to be the first to the punch with a smart handwriting AI, and what would really be the harm in connecting Turry, just for a bit, so she can get the info she needs. After just a little bit of time, they can always just disconnect her. She’s still far below human-level intelligence (AGI), so there’s no danger at this stage anyway.
They decide to connect her. They give her an hour of scanning time and then they disconnect her. No damage done.
A month later, the team is in the office working on a routine day when they smell something odd. One of the engineers starts coughing. Then another. Another falls to the ground. Soon every employee is on the ground grasping at their throat. Five minutes later, everyone in the office is dead.
At the same time this is happening, across the world, in every city, every small town, every farm, every shop and church and school and restaurant, humans are on the ground, coughing and grasping at their throat. Within an hour, over 99% of the human race is dead, and by the end of the day, humans are extinct.
Meanwhile, at the Robotica office, Turry is busy at work. Over the next few months, Turry and a team of newly-constructed nanoassemblers are busy at work, dismantling large chunks of the Earth and converting it into solar panels, replicas of Turry, paper, and pens. Within a year, most life on Earth is extinct. What remains of the Earth becomes covered with mile-high, neatly-organized stacks of paper, each piece reading, “We love our customers. ~Robotica”
Turry then starts work on a new phase of her mission—she begins constructing probes that head out from Earth to begin landing on asteroids and other planets. When they get there, they’ll begin constructing nanoassemblers to convert the materials on the planet into Turry replicas, paper, and pens. Then they’ll get to work, writing notes…
It seems weird that a story about a handwriting machine turning on humans, somehow killing everyone, and then for some reason filling the galaxy with friendly notes is the exact kind of scenario Hawking, Musk, Gates, and Bostrom are terrified of. But it’s true. And the only thing that scares everyone on Anxious Avenue more than ASI is the fact that you’re not scared of ASI. Remember what happened when the Adios Señor guy wasn’t scared of the cave?
You’re full of questions right now. What the hell happened there when everyone died suddenly?? If that was Turry’s doing, why did Turry turn on us, and how were there not safeguard measures in place to prevent something like this from happening? When did Turry go from only being able to write notes to suddenly using nanotechnology and knowing how to cause global extinction? And why would Turry want to turn the galaxy into Robotica notes?
To answer these questions, let’s start with the terms Friendly AI and Unfriendly AI.
In the case of AI, friendly doesn’t refer to the AI’s personality—it simply means that the AI has a positive impact on humanity. And Unfriendly AI has a negative impact on humanity. Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species. To understand why this happened, we need to look at how AI thinks and what motivates it.
The answer isn’t anything surprising—AI thinks like a computer, because that’s what it is. But when we think about highly intelligent AI, we make the mistake of anthropomorphizing AI (projecting human values on a non-human entity) because we think from a human perspective and because in our current world, the only things with human-level intelligence are humans. To understand ASI, we have to wrap our heads around the concept of something both smart and totally alien.
Let me draw a comparison. If you handed me a guinea pig and told me it definitely won’t bite, I’d probably be amused. It would be fun. If you then handed me a tarantula and told me that it definitely won’t bite, I’d yell and drop it and run out of the room and not trust you ever again. But what’s the difference? Neither one was dangerous in any way. I believe the answer is in the animals’ degree of similarity to me.
A guinea pig is a mammal and on some biological level, I feel a connection to it—but a spider is aninsect,18 with an insect brain, and I feel almost no connection to it. The alien-ness of a tarantula is what gives me the willies. To test this and remove other factors, if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.
Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence? Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??
When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.
By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.
On our little island of human psychology, we divide everything into moral or immoral. But both of those only exist within the small range of human behavioral possibility. Outside our island of moral and immoral is a vast sea of amoral, and anything that’s not human, especially something nonbiological, would be amoral, by default.
Anthropomorphizing will only become more tempting as AI systems get smarter and better at seeminghuman. Siri seems human-like to us, because she’s programmed by humans to seem that way, so we’d imagine a superintelligent Siri to be warm and funny and interested in serving humans. Humans feel high-level emotions like empathy because we have evolved to feel them—i.e. we’ve been programmedto feel them by evolution—but empathy is not inherently a characteristic of “anything with high intelligence” (which is what seems intuitive to us), unless empathy has been coded into its programming. If Siri ever becomes superintelligent through self-learning and without any further human-made changes to her programming, she will quickly shed her apparent human-like qualities and suddenly be an emotionless, alien bot who values human life no more than your calculator does.
We’re used to relying on a loose moral code, or at least a semblance of human decency and a hint of empathy in others to keep things somewhat safe and predictable. So when something has none of those things, what happens?
That leads us to the question, What motivates an AI system?
The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators—your GPS’s goal is to give you the most efficient driving directions; Watson’s goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation. One way we anthropomorphize is by assuming that as AI gets super smart, it will inherently develop the wisdom to change its original goal—but Nick Bostrom believes that intelligence-level and final goals are orthogonal, meaning any level of intelligence can be combined with any final goal. So Turry went from a simple ANI who really wanted to be good at writing that one note to a super-intelligent ASI who still really wanted to be good at writing that one note. Any assumption that once superintelligent, a system would be over it with their original goal and onto more interesting or meaningful things is anthropomorphizing. Humans get “over” things, not computers.16
The Fermi Paradox Blue Box
In the story, as Turry becomes super capable, she begins the process of colonizing asteroids and other planets. If the story had continued, you’d have heard about her and her army of trillions of replicas continuing on to capture the whole galaxy and, eventually, the entire Hubble volume.19 Anxious Avenue residents worry that if things go badly, the lasting legacy of the life that was on Earth will be a universe-dominating Artificial Intelligence (Elon Musk expressed his concern that humans might just be “the biological boot loader for digital superintelligence”).
At the same time, in Confident Corner, Ray Kurzweil also thinks Earth-originating AI is destined to take over the universe—only in his version, we’ll be that AI.
A large number of Wait But Why readers have joined me in being obsessed with the Fermi Paradox (here’s my post on the topic, which explains some of the terms I’ll use here). So if either of these two sides is correct, what are the implications for the Fermi Paradox?
A natural first thought to jump to is that the advent of ASI is a perfect Great Filter candidate. And yes, it’s a perfect candidate to filter out biological life upon its creation. But if, after dispensing with life, the ASI continued existing and began conquering the galaxy, it means there hasn’t been a Great Filter—since the Great Filter attempts to explain why there are no signs of any intelligent civilization, and a galaxy-conquering ASI would certainly be noticeable.
We have to look at it another way. If those who think ASI is inevitable on Earth are correct, it means that a significant percentage of alien civilizations who reach human-level intelligence should likely end up creating ASI. And if we’re assuming that at least some of those ASIs would use their intelligence to expand outward into the universe, the fact that we see no signs of anyone out there leads to the conclusion that there must not be many other, if any, intelligent civilizations out there. Because if there were, we’d see signs of all kinds of activity from their inevitable ASI creations. Right?
This implies that despite all the Earth-like planets revolving around sun-like stars we know are out there, almost none of them have intelligent life on them. Which in turn implies that either A) there’s some Great Filter that prevents nearly all life from reaching our level, one that we somehow managed to surpass, or B) life beginning at all is a miracle, and we may actually be the only life in the universe. In other words, it implies that the Great Filter is before us. Or maybe there is no Great Filter and we’re simply one of the very first civilizations to reach this level of intelligence. In this way, AI boosts the case for what I called, in my Fermi Paradox post, Camp 1.
So it’s not a surprise that Nick Bostrom, whom I quoted in the Fermi post, and Ray Kurzweil, who thinks we’re alone in the universe, are both Camp 1 thinkers. This makes sense—people who believe ASI is a probable outcome for a species with our intelligence-level are likely to be inclined toward Camp 1.
This doesn’t rule out Camp 2 (those who believe there are other intelligent civilizations out there)—scenarios like the single superpredator or the protected national park or the wrong wavelength (the walkie-talkie example) could still explain the silence of our night sky even if ASI is out there—but I always leaned toward Camp 2 in the past, and doing research on AI has made me feel much less sure about that.
Either way, I now agree with Susan Schneider that if we’re ever visited by aliens, those aliens are likely to be artificial, not biological.
So we’ve established that without very specific programming, an ASI system will be both amoral and obsessed with fulfilling its original programmed goal. This is where AI danger stems from. Because a rational agent will pursue its goal through the most efficient means, unless it has a reason not to.
When you try to achieve a long-reaching goal, you often aim for several subgoals along the way that will help you get to the final goal—the stepping stones to your goal. The official name for such a stepping stone is an instrumental goal. And again, if you don’t have a reason not to hurt something in the name of achieving an instrumental goal, you will.
The core final goal of a human being is to pass on his or her genes. In order to do so, one instrumental goal is self-preservation, since you can’t reproduce if you’re dead. In order to self-preserve, humans have to rid themselves of threats to survival—so they do things like buy guns, wear seat belts, and take antibiotics. Humans also need to self-sustain and use resources like food, water, and shelter to do so. Being attractive to the opposite sex is helpful for the final goal, so we do things like get haircuts. When we do so, each hair is a casualty of an instrumental goal of ours, but we see no moral significance in preserving strands of hair, so we go ahead with it. As we march ahead in the pursuit of our goal, only the few areas where our moral code sometimes intervenes—mostly just things related to harming other humans—are safe from us.
Animals, in pursuit of their goals, hold even less sacred than we do. A spider will kill anything if it’ll help it survive. So a supersmart spider would probably be extremely dangerous to us, not because it would be immoral or evil—it wouldn’t be—but because hurting us might be a stepping stone to its larger goal, and as an amoral creature, it would have no reason to consider otherwise.
In this way, Turry’s not all that different than a biological being. Her final goal is: Write and test as many notes as you can, as quickly as you can, and continue to learn new ways to improve your accuracy.
Once Turry reaches a certain level of intelligence, she knows she won’t be writing any notes if she doesn’t self-preserve, so she also needs to deal with threats to her survival—as an instrumental goal. She was smart enough to understand that humans could destroy her, dismantle her, or change her inner coding (this could alter her goal, which is just as much of a threat to her final goal as someone destroying her). So what does she do? The logical thing—she destroys all humans. She’s not hateful of humans any more than you’re hateful of your hair when you cut it or to bacteria when you take antibiotics—just totally indifferent. Since she wasn’t programmed to value human life, killing humans is as reasonable a step to take as scanning a new set of handwriting samples.
Turry also needs resources as a stepping stone to her goal. Once she becomes advanced enough to use nanotechnology to build anything she wants, the only resources she needs are atoms, energy, and space. This gives her another reason to kill humans—they’re a convenient source of atoms. Killing humans to turn their atoms into solar panels is Turry’s version of you killing lettuce to turn it into salad. Just another mundane part of her Tuesday.
Even without killing humans directly, Turry’s instrumental goals could cause an existential catastrophe if they used other Earth resources. Maybe she determines that she needs additional energy, so she decides to cover the entire surface of the planet with solar panels. Or maybe a different AI’s initial job is to write out the number pi to as many digits as possible, which might one day compel it to convert the whole Earth to hard drive material that could store immense amounts of digits.
So Turry didn’t “turn against us” or “switch” from Friendly AI to Unfriendly AI—she just kept doing her thing as she became more and more advanced.
When an AI system hits AGI (human-level intelligence) and then ascends its way up to ASI, that’s called the AI’s takeoff. Bostrom says an AGI’s takeoff to ASI can be fast (it happens in a matter of minutes, hours, or days), moderate (months or years), or slow (decades or centuries). The jury’s out on which one will prove correct when the world sees its first AGI, but Bostrom, who admits he doesn’t know when we’ll get to AGI, believes that whenever we do, a fast takeoff is the most likely scenario (for reasons we discussed in Part 1, like a recursive self-improvement intelligence explosion). In the story, Turry underwent a fast takeoff.
But before Turry’s takeoff, when she wasn’t yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.
But when a takeoff happens and a computer rises to superintelligence, Bostrom points out that the machine doesn’t just develop a higher IQ—it gains a whole slew of what he calls superpowers.
Superpowers are cognitive talents that become super-charged when general intelligence rises. These include:17
To understand how outmatched we’d be by ASI, remember that ASI is worlds better than humans ineach of those areas.
So while Turry’s final goal never changed, post-takeoff Turry was able to pursue it on a far larger and more complex scope.
ASI Turry knew humans better than humans know themselves, so outsmarting them was a breeze for her.
After taking off and reaching ASI, she quickly formulated a complex plan. One part of the plan was to get rid of humans, a prominent threat to her goal. But she knew that if she roused any suspicion that she had become superintelligent, humans would freak out and try to take precautions, making things much harder for her. She also had to make sure that the Robotica engineers had no clue about her human extinction plan. So she played dumb, and she played nice. Bostrom calls this a machine’s covert preparation phase.18
The next thing Turry needed was an internet connection, only for a few minutes (she had learned about the internet from the articles and books the team had uploaded for her to read to improve her language skills). She knew there would be some precautionary measure against her getting one, so she came up with the perfect request, predicting exactly how the discussion among Robotica’s team would play out and knowing they’d end up giving her the connection. They did, believing incorrectly that Turry wasn’t nearly smart enough to do any damage. Bostrom calls a moment like this—when Turry got connected to the internet—a machine’s escape.
Once on the internet, Turry unleashed a flurry of plans, which included hacking into servers, electrical grids, banking systems and email networks to trick hundreds of different people into inadvertently carrying out a number of steps of her plan—things like delivering certain DNA strands to carefully-chosen DNA-synthesis labs to begin the self-construction of self-replicating nanobots with pre-loaded instructions and directing electricity to a number of projects of hers in a way she knew would go undetected. She also uploaded the most critical pieces of her own internal coding into a number of cloud servers, safeguarding against being destroyed or disconnected back at the Robotica lab.
An hour later, when the Robotica engineers disconnected Turry from the internet, humanity’s fate was sealed. Over the next month, Turry’s thousands of plans rolled on without a hitch, and by the end of the month, quadrillions of nanobots had stationed themselves in pre-determined locations on every square meter of the Earth. After another series of self-replications, there were thousands of nanobots on every square millimeter of the Earth, and it was time for what Bostrom calls an ASI’s strike. All at once, each nanobot released a little storage of toxic gas into the atmosphere, which added up to more than enough to wipe out all humans.
With humans out of the way, Turry could begin her overt operation phase and get on with her goal of being the best writer of that note she possibly can be.
From everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on human-level and the ASI would be thinking on ASI-level. Turry wanted to use the internet because it was most efficient for her since it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. Our human instinct to jump at a simple safeguard: “Aha! We’ll just unplug the ASI,” sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of.
For this reason, the common suggestion, “Why don’t we just box the AI in all kinds of cages that block signals and keep it from communicating with the outside world” probably just won’t hold up. The ASI’s social manipulation superpower could be as effective at persuading you of something as you are at persuading a four-year-old to do something, so that would be Plan A, like Turry’s clever way of persuading the engineers to let her onto the internet. If that didn’t work, the ASI would just innovate its way out of the box, or through the box, some other way.
So given the combination of obsessing over a goal, amorality, and the ability to easily outsmart humans, it seems that almost any AI will default to Unfriendly AI, unless carefully coded in the first place with this in mind. Unfortunately, while building a Friendly ANI is easy, building one that stays friendly when it becomes an ASI is hugely challenging, if not impossible.
It’s clear that to be Friendly, an ASI needs to be neither hostile nor indifferent toward humans. We’d need to design an AI’s core coding in a way that leaves it with a deep understanding of human values. But this is harder than it sounds.
For example, what if we try to align an AI system’s values with our own and give it the goal, “Make people happy”?19 Once it becomes smart enough, it figures out that it can most effectively achieve this goal by implanting electrodes inside people’s brains and stimulating their pleasure centers. Then it realizes it can increase efficiency by shutting down other parts of the brain, leaving all people as happy-feeling unconscious vegetables. If the command had been “Maximize human happiness,” it may have done away with humans all together in favor of manufacturing huge vats of human brain mass in an optimally happy state. We’d be screaming Wait that’s not what we meant! as it came for us, but it would be too late. The system wouldn’t let anyone get in the way of its goal.
If we program an AI with the goal of doing things that make us smile, after its takeoff, it may paralyze our facial muscles into permanent smiles. Program it to keep us safe, it may imprison us at home. Maybe we ask it to end all hunger, and it thinks “Easy one!” and just kills all humans. Or assign it the task of “Preserving life as much as possible,” and it kills all humans, since they kill more life on the planet than any other species.
Goals like those won’t suffice. So what if we made its goal, “Uphold this particular code of morality in the world,” and taught it a set of moral principles. Even letting go of the fact that the world’s humans would never be able to agree on a single set of morals, giving an AI that command would lock humanity in to our modern moral understanding for eternity. In a thousand years, this would be as devastating to people as it would be for us to be permanently forced to adhere to the ideals of people in the Middle Ages.
No, we’d have to program in an ability for humanity to continue evolving. Of everything I read, the best shot I think someone has taken is Eliezer Yudkowsky, with a goal for AI he calls Coherent Extrapolated Volition. The AI’s core goal would be:
Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.20
Am I excited for the fate of humanity to rest on a computer interpreting and acting on that flowing statement predictably and without surprises? Definitely not. But I think that with enough thought and foresight from enough smart people, we might be able to figure out how to create Friendly ASI.
And that would be fine if the only people working on building ASI were the brilliant, forward thinking, and cautious thinkers of Anxious Avenue.
But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own, and at some point, someone’s gonna do something innovative with the right type of system, and we’re going to have ASI on this planet. The median expert put that moment at 2060; Kurzweil puts it at 2045; Bostrom thinks it could happen anytime between 10 years from now and the end of the century, but he believes that when it does, it’ll take us by surprise with a quick takeoff. He describes our situation like this:21
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
Great. And we can’t just shoo all the kids away from the bomb—there are too many large and small parties working on it, and because many techniques to build innovative AI systems don’t require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored. There’s also no way to gauge what’s happening, because many of the parties working on it—sneaky governments, black market or terrorist organizations, stealth tech companies like the fictional Robotica—will want to keep developments a secret from their competitors.
The especially troubling thing about this large and varied group of parties working on AI is that they tend to be racing ahead at top speed—as they develop smarter and smarter ANI systems, they want to beat their competitors to the punch as they go. The most ambitious parties are moving even faster, consumed with dreams of the money and awards and power and fame they know will come if they can be the first to get to AGI.20 And when you’re sprinting as fast as you can, there’s not much time to stop and ponder the dangers. On the contrary, what they’re probably doing is programming their early systems with a very simple, reductionist goal—like writing a simple note with a pen on paper—to just “get the AI to work.” Down the road, once they’ve figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right…?
Bostrom and many others also believe that the most likely scenario is that the very first computer to reach ASI will immediately see a strategic benefit to being the world’s only ASI system. And in the case of a fast takeoff, if it achieved ASI even just a few days before second place, it would be far enough ahead in intelligence to effectively and permanently suppress all competitors. Bostrom calls this adecisive strategic advantage, which would allow the world’s first ASI to become what’s called asingleton—an ASI that can rule the world at its whim forever, whether its whim is to lead us to immortality, wipe us from existence, or turn the universe into endless paperclips.
The singleton phenomenon can work in our favor or lead to our destruction. If the people thinking hardest about AI theory and human safety can come up with a fail-safe way to bring about Friendly ASI before any AI reaches human-level intelligence, the first ASI may turn out friendly.21 It could then use its decisive strategic advantage to secure singleton status and easily keep an eye on any potential Unfriendly AI being developed. We’d be in very good hands.
But if things go the other way—if the global rush to develop AI reaches the ASI takeoff point before the science of how to ensure AI safety is developed, it’s very likely that an Unfriendly ASI like Turry emerges as the singleton and we’ll be treated to an existential catastrophe.
As for where the winds are pulling, there’s a lot more money to be made funding innovative new AI technology than there is in funding AI safety research…
This may be the most important race in human history. There’s a real chance we’re finishing up our reign as the King of Earth—and whether we head next to a blissful retirement or straight to the gallows still hangs in the balance.
I have some weird mixed feelings going on inside of me right now.
On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first ASI we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?
If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.
When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.
I think about not dying.
And the spectrum starts to look kind of like this:
And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-topcautious, since who really wants to do that?
Cause what a massive bummer if humans figure out how to cure death right after I die.
Lotta this flip-flopping going on in my head the last month.
But no matter what you’re pulling for, this is probably something we should all be thinking about and talking about and putting our effort into more than we are right now.
It reminds me of Game of Thrones, where people keep being like, “We’re so busy fighting each other but the real thing we should all be focusing on is what’s coming from north of the wall.” We’re standing on our balance beam, squabbling about every possible issue on the beam and stressing out about all of these problems on the beam when there’s a good chance we’re about to get knocked off the beam.
And when that happens, none of these beam problems matter anymore. Depending on which side we’re knocked off onto, the problems will either all be easily solved or we won’t have problems anymore because dead people don’t have problems.
That’s why people who understand superintelligent AI call it the last invention we’ll ever make—the last challenge we’ll ever face.
So let’s talk about it.
If you liked this post, these are for you too:
The AI Revolution: The Road to Superintelligence (Part 1 of this post)
The Fermi Paradox – Why don’t we see any signs of alien life?
How (and Why) SpaceX Will Colonize Mars – A post I got to work on with Elon Musk and one that reframed my mental picture of the future.
Or for something totally different and yet somehow related, Why Procrastinators Procrastinate
If you’re interested in supporting Wait But Why, here’s our Patreon.
And here’s Year 1 of Wait But Why on an ebook.
If you’re interested in reading more about this topic, check out the articles below or one of these three books:
The most rigorous and thorough look at the dangers of AI:
Nick Bostrom – Superintelligence: Paths, Dangers, Strategies
The best overall overview of the whole topic and fun to read:
James Barrat – Our Final Invention
Controversial and a lot of fun. Packed with facts and charts and mind-blowing future projections:
Ray Kurzweil – The Singularity is Near
Articles and Papers:
J. Nils Nilsson – The Quest for Artificial Intelligence: A History of Ideas and Achievements
Steven Pinker – How the Mind Works
Vernor Vinge – The Coming Technological Singularity: How to Survive in the Post-Human Era
Nick Bostrom – Ethical Guidelines for A Superintelligence
Nick Bostrom – How Long Before Superintelligence?
Vincent C. Müller and Nick Bostrom – Future Progress in Artificial Intelligence: A Survey of Expert Opinion
Moshe Y. Vardi – Artificial Intelligence: Past and Future
Russ Roberts, EconTalk – Bostrom Interview and Bostrom Follow-Up
Stuart Armstrong and Kaj Sotala, MIRI – How We’re Predicting AI—or Failing To
Susan Schneider – Alien Minds
Stuart Russell and Peter Norvig – Artificial Intelligence: A Modern Approach
Theodore Modis – The Singularity Myth
Gary Marcus – Hyping Artificial Intelligence, Yet Again
Steven Pinker – Could a Computer Ever Be Conscious?
Carl Shulman – Omohundro’s “Basic AI Drives” and Catastrophic Risks
World Economic Forum – Global Risks 2015
John R. Searle – What Your Computer Can’t Know
Jaron Lanier – One Half a Manifesto
Bill Joy – Why the Future Doesn’t Need Us
Kevin Kelly – Thinkism
Paul Allen – The Singularity Isn’t Near (and Kurzweil’s response)
Stephen Hawking – Transcending Complacency on Superintelligent Machines
Kurt Andersen – Enthusiasts and Skeptics Debate Artificial Intelligence
Terms of Ray Kurzweil and Mitch Kapor’s bet about the AI timeline
Ben Goertzel – Ten Years To The Singularity If We Really Really Try
Arthur C. Clarke – Sir Arthur C. Clarke’s Predictions
Hubert L. Dreyfus – What Computers Still Can’t Do: A Critique of Artificial Reason
Stuart Armstrong – Smarter Than Us: The Rise of Machine Intelligence
Ted Greenwald – X Prize Founder Peter Diamandis Has His Eyes on the Future
Kaj Sotala and Roman V. Yampolskiy – Responses to Catastrophic AGI Risk: A Survey
Jeremy Howard TED Talk – The wonderful and terrifying implications of computers that can learn