AI? 01 — “What is AI?”: A Brief Introduction

Jes Parent
Dec 22, 2016
October 2016 in NYC

What is Artificial Intelligence?

That will be the guiding question for this series of posts, albeit an inherently flawed and limited banner. It is the most accessible phrasing of the concept I know of thus far, but the first step on my journey — and the foundation of my significant interest in “AI” — has been realizing that, much like its myriad points of interface, present and future, what AI really is will be much more about what it changes, what it influences, and what the landscape of the future looks like.

Yes, Terminators. Yes, robots, self-driving cars, Siri & Cortana, sex robots, The Matrix, and all of that. Let’s get that out of the way now; those are culturally accessible things. But there is the economy, there are human and (nonhuman?) rights, there is the colonization of other planets at stake — and I think the deceptive irony of AI is the assumption that it fits inside a box.

My personal foundation for seriously caring about AI was the rockstar NYU conference called Ethics of AI, put on by their bioethics center. I say rockstar now, having a fuller understanding of the panel and the people in the audience. Thomas Nagel was there to ask a few questions. I had little idea of the speakers’ backgrounds or merits, even though I sat next to Stuart Russell (and attempted the most awkward selfie possible with him in the background), and arrived late to the start of one session only to brush past Mr. and Mrs. Tegmark (a thousand apologies). One of my friends got to talk with Stephen Wolfram during a break. Nick Bostrom was the leadoff speaker and stayed for most of the conference. I’m not one for celebrity, but I can say that I had no idea what I was getting into when I saw this essentially free, extremely local AI ethics conference — I just jumped on the opportunity. I found my way to New York City on a Megabus, and that was the beginning.

I’ve not yet come a long way, but two months later I still very much wish I were in that crowd again, this time with more questions, more proposals, and more of a sense of magnitude. Not because I think AI is going to turn the world into paperclips, but because of the tremendous breadth of perspectives that were on display and given space, voice, and respect at the conference. I’m sure it can be duplicated, and I hope it is — and I know there are many good people trying to take a holistic approach — but if I am to propose a voice for myself in the realm of AI right now, it is that we need more intentional conversation about the varying aspects of “AI” and its myriad implications. It is necessary but not sufficient to ask “what is consciousness?”; so too, to wonder about “when”.

waitbutwhy’s excellent crash course on AI and its implications

The advent of artificial intelligence has the capacity to significantly influence so many aspects of our lives that it’s hard to talk about — hard to talk about in ways beyond computer vision, or self-driving cars, or the cautions of Elon Musk, Stephen Hawking, Bill Gates, and many other big names. It’s very hard — even the slew of experts at the gem of a conference I was able to attend struggled. No, I don’t really like to say struggled, because that isn’t a nuanced view of what was happening; ultimately I see it as an indication of where we are now.

I’ll reference a wonderful introduction on waitbutwhy.com — it gives a solid overview of the problem’s details and scope in a way that I am only partially conveying in this cursory prelude.

I’ll also reference The Unfinished Fable of the Sparrows, prominently set as the first thing you read in Nick Bostrom’s seminal Superintelligence. It encapsulates very much where we are now.

I’m glad there are a band of sparrows talking about how to make the most of things. I’m glad, even if it was an unusual mix of heroic experts that didn’t agree on much at Ethics of AI, that we’re here now. I’m not glad out of a sense of fear that things are ended, but more out of a sense of respect.

My favorite line in the Terminator series is a relatively unimpressive one, uttered by the young John Connor in the second film, when he and Good Arnold are mobilizing and go to some remote location to get supplies. They enter the secret crypt, which of course is loaded with weapons — weapons Arnold will unleash on people shortly thereafter. But John says something to the effect of, “That’s my mom, she’s always thinking ahead…”

While that’s quite a cool moment in the film to revel in the weaponry, it stuck with me in terms of how I view my life, how I view what comes after me, and my place in the grand scheme of things. I have no intention of making this discussion about Transhumanism, if you were wondering; rather, I want this journey to be about openly looking at all aspects of the AI ‘problem’, or perhaps ‘revolution’ — yes, I haven’t even figured out how I want to refer to it, so far.

I do know it will be something that fits well with the Albert Einstein quote:

We cannot solve the significant problems we face while being at the same level of thinking we were at when we created them.

AI is affecting, and will continue to affect, how we use big data, and big data is affecting how we interpret and even are able to see many things. Big data and AI are like telescopes and other tools that let us “see” or interpret data far beyond what our natural senses allow — much of our modern technology is founded upon that which we cannot naturally see. The curiosity, and the Pandora’s box implication, of AI — particularly the problem of general AI or complete AI — is the advent of an entirely separate form of consciousness. In one sense, the full advent of a real general AI or a superintelligent AI is essentially SETI, the search for extraterrestrial intelligence. It has all the makings of finding alien life somewhere else, and yet this is something humanity itself is birthing.

Yet where are the lines around what we anthropomorphize? How much folly is there in assuming it will have the same outlooks we have, just because we have them — should such an entity or set of entities exist? Homo sapiens, at present anyway, have a difficult time dealing with non-fixed-point consciousness in ourselves and in our relationships with “Others”, even other humans. Interfacing with anything like a distributed consciousness is completely alien, beyond the pale comparisons of Siri or Cortana today. So too is the SETI/AI-as-other-life issue, which in reality is not going to happen tomorrow — or, potentially, at all. But I present these as the far-end extremes of what could be.

Going forward, I intend to leave all the facets on the table to be looked at. I will look at the raw technical aspect, from software to hardware, DeepMind to Watson to everything else, yes. But this project isn’t only going to be about what IBM or Google are doing. I very much look forward to taking serious looks at efforts such as Berkeley’s Center for Human-Compatible AI, the OpenAI project, and of course the Future of Humanity Institute and the Future of Life Institute. I do want to look at the consciousness debate, and at the economics, the social issues, the legal issues.

I have two books on the docket, and I’m torn over which to read first — I want to read them both: finishing Bostrom’s Superintelligence, and a book perhaps more interesting to me than the expansive projections of Bostrom’s philosophy, Artificial Superintelligence by Roman V. Yampolskiy.

There are many talks, podcasts, writings, and discussions to come. I have a slight desire to number them, at this time, even if they branch into videos or podcasts I create. This project is similar to, maybe even adjacent to, but significantly different from “What Happened to Cybernetics?”, although the overlap will not be avoided.

But this is the start of the journey. The ultimate deliverables would be a very brief primer, or set of primers, on key “subfields” or topically related areas pertaining to the advance and advent of artificial intelligence.

I will make the caveat that I am not an expert in any subfield of this arena, let alone the topic itself, and that I will likely find things that later correct or shift my viewpoints or concerns as I go — and I’m okay with that. For one, that’s how you learn; for two, there’s a lot of figuring out to be done, and nobody seems to have it all figured out — even remotely, to be quite sober about it. I do not see myself as an optimist or a pessimist, but I am someone who is concerned — and my concern is much more about humanity and its misapplication of, or foolishness regarding, power than about the technology or “alien” AI entities themselves. I’m quite sure I will comment on those human concerns extensively going forward.

I invite you to follow me here on Medium or on any other social media where you can find me, Twitter @JesParent in particular, and also at my internet home page, j-p.tech.

Thanks for reading and please feel free to offer any leads to follow in the comments here or on social media.

- Jesse

