
To Think or Not to Think. What is the question?

I’m currently reading a pair of books about artificial intelligence. The Emperor’s New Mind, written by Roger Penrose in 1989, is an exploration of how computers work and why, in Penrose’s view, they will never be able to mimic the intelligence of the human mind. How to Create a Mind, written by Ray Kurzweil in 2012, is about how the human mind works, and how computers will be able to mimic its structures and reach a level of true artificial intelligence. I’m deliberately reading them semi-simultaneously to maximize the juxtaposition, and the contrast in both era and technological outlook is stark. But I don’t want this blog post to be a comparison of the two books or approaches. Rather, I want to talk about the question that has been plaguing me since I started reading them.

What is intelligence?

More specifically, how do we identify intelligence in others? If we’re trying to decide what counts as artificial intelligence, what are our standards? How do we know when we’ve made it there?

I am intelligent. No one, and nothing, can convince me otherwise. My logic for this pronouncement is trite and clichéd: I think, therefore I am. I am intelligent because I can recognize my own intelligence.

So I know that I am intelligent.  But how do I know that you are intelligent?

Well, since you are reading these words, that provides a measure of intelligence. But a computer can read these words as well, technically. If you leave a comment, you demonstrate your ability to interact, to communicate based on these words, to create something new based on what I’m talking about. If you have a blog, though, you’ve probably noticed lots of spam comments, which are computers doing exactly that. But those computers won’t do it as well as you can; I can tell a spam comment from one written by one of you. So that commenting ability shows your intelligence. If we were in person, we could have a conversation about what I wrote, and that conversation would demonstrate to me that you are intelligent.

In fact, conversation has long been one of the hallmarks of artificial intelligence, in something called a Turing Test. The test is this: in a blind experiment, can a human judge hold a conversation with both a computer and a real person, and reliably identify which one is the computer? Essentially, can a computer mimic human conversation and pass as a human? And indeed, a chatbot named Eugene passed a limited Turing Test a few years ago (well, kinda sorta, ish, not really, but hey, good effort). Does that make Eugene intelligent? Is mimicry enough of a requirement for intelligence?
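To make the setup concrete, here is a minimal sketch of that blind protocol. It is purely illustrative, not any real benchmark: the canned-reply bot standing in for the computer, and every name in it, are invented for this example.

```python
import random

# A toy sketch of the blind Turing Test protocol described above.
# The "machine" is a trivial canned-reply bot, a stand-in for a real
# conversational program.

CANNED_REPLIES = [
    "That's an interesting question.",
    "I hadn't thought about it that way.",
    "Could you tell me more?",
]

def machine_reply(prompt: str) -> str:
    """The hidden computer: mimics conversation with canned replies."""
    return random.choice(CANNED_REPLIES)

def human_reply(prompt: str) -> str:
    """The hidden person: a real human types an answer."""
    return input(f"(hidden human, answering {prompt!r})\n> ")

def run_trial(rounds: int = 3) -> bool:
    """One blind trial: the judge questions channels A and B, then guesses
    which one is the computer. Returns True if the guess is correct."""
    machine_channel = random.choice(["A", "B"])  # blind assignment
    human_channel = "B" if machine_channel == "A" else "A"
    channels = {machine_channel: machine_reply, human_channel: human_reply}

    for _ in range(rounds):
        prompt = input("(judge) Ask both parties a question:\n> ")
        for label in ("A", "B"):
            print(f"[{label}] {channels[label](prompt)}")

    guess = input("(judge) Which channel is the computer, A or B?\n> ")
    return guess.strip().upper() == machine_channel

if __name__ == "__main__":
    if run_trial():
        print("The judge spotted the machine.")
    else:
        print("The machine passed as human, at least this time.")
```

Even in this toy version, notice what is actually being measured: the judge only ever sees text coming down a channel, so mimicry is the only thing being tested.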

I think not (as would most of the world). Part of the problem we face with artificial intelligence is that the most convenient way to test intelligence is by performing tasks. How smart are rats? Put them in a maze and find out. How smart are monkeys? See if we can teach them sign language. But intelligence is not defined by task completion. I am not intelligent because I can hold a conversation; I can hold a conversation because I am intelligent. I am not intelligent because I can write these words, and drive a car, and play chess, and write music; I can do those things because I am intelligent. A computer can do all of those tasks as well. But the computer does them because it is following a preset series of commands, an algorithm created by a human to complete those tasks. So in a world where more and more of the tasks that we might use to show our intelligence can be completed as well or better by a computer or machine, how do we identify intelligence? How do we know when we truly have an artificial intelligence, and not just some combination of algorithms? (Which is of course assuming that we are more than simply a combination of algorithms, which is hardly a sure thing.)

Perhaps the task that best shows our intelligence is the task of questioning. My intelligence is not demonstrated by my ability to converse adequately with you. My intelligence is demonstrated by my ability to question the world around me, and, out of those questions, to manipulate the world around me. We are intelligent because we can ask the question “What is intelligence?” and then design experiments to try to figure it out. In other words, we are able to do more than simply follow a series of commands to solve a problem; we are able to identify the problem, and then create the series of commands to solve it. Perhaps we’ll know we have created artificial intelligence when the computer starts trying to determine whether or not we are intelligent. Maybe God knew that He got mankind right when we started to question His existence?

Question for another day: Why are we thinking about intelligence in terms of human intelligence, based on human tasks? Why wouldn’t artificial intelligence look completely different from human intelligence, with its own set of values and goals?

Only tangentially related, but just because I wanted to add a video: https://www.youtube.com/watch?v=9TRv0cXUVQw

Aesthetically Scientific

Back in January, NPR’s Science Friday did a story about Steve Erenberg, a man who collects old scientific equipment. To go along with the radio interview, in a wonderfully multimodal experience, Luke Groskin, the story’s producer, created a video interview/tour of Steve’s store.

Something that sometimes gets lost when we are confronted by technology is the purely aesthetic value of that technology, and what those aesthetics are saying. One of my favorite lines from the video is Steve’s point about the importance of aesthetics:

Quack devices are “designed to be better looking than their purpose…The more important it looked, the better people thought it worked, and the more money the doctor would get.”

Aesthetics matter. Are you going to be comfortable jumping into an MRI machine with rust? How about one with teeth? Probably not, right? Part of the job of technology is to convince people to use it, and we use technology that looks, feels, and sounds usable. If no care were given to aesthetics, how many inventions would ever come to mass market? The aesthetics of our technology (i.e., not having MRI machines with teeth) play an important rhetorical role in establishing ethos for the equipment itself. Such and such looks good, looks modern, looks sleek, and therefore must perform its function remarkably well.

Another connection that I really enjoyed came from Groskin, the producer, in a later interview with Science Friday’s Ira Flatow:

“You can see the art movement at which they [the science pieces] were created – the art movements during which they were created – in the actual pieces. So you can see expressionism in these dentistry practice anatomical models…You can see Victorian clawed tables. You can see more modern tin and aluminum, very sleek, very very simplistic designs.”

In other words, the aesthetics of science and technology are a product of the culture in which they exist. Some of the devices in the video might look like torture devices to us now because that’s what our cultural language has decreed: torture devices are old and antiquated, not a product of our modern society, and many of the slasher films we watch are built on the technology of the past. They look terrifying to us because of our cultural climate, but we can’t ever forget that our own things only look the way they do because of our cultural expectations. We only need to look at yearbook photos to see how quickly cultural expectations change. So we should perhaps be aware that Now is not forever, and that modern gives way to antiquated very quickly. What will our great-great-grandchildren think of the absurdly clunky and spacious laptop computer that I am typing this article on? As Steve Erenberg summarized quite nicely:

“That’s what science is. We always think state of the art and we’re ahead of our time and it’ll never get any more modern than that, but it’s always changing.”

Science is not static but perpetually changing, as is every other facet of our culture. And that change is not a bad or scary thing. That change is called progress, and it is the reason that the “quack devices,” as Erenberg calls them, are not still in use. Also, let’s remember that “quack devices” might not be entirely fair: just because something didn’t work doesn’t mean that it wasn’t valuable. After all, the contribution of failure to scientific progress is extremely important. Science changes, culture changes, and hopefully, those changes are for the better.

When Failure Counts

On January 17, 2016, SpaceX attempted once more to land their Falcon 9 rocket on a barge in the ocean. The resulting failure was pretty spectacular (there were no people aboard, and no one was injured).

Throughout most of SpaceX’s history, Elon Musk has been quite public about his company’s many successes and its occasional misfires/spectacular explosions. Let’s remember what the SpaceX team is trying to do: land a rocket that has flown to space and delivered a payload, in an effort to make rockets reusable and thus bring down the cost of spaceflight. This is something that had never been done before SpaceX. And prior to this landing, SpaceX had made several such attempts, with varying levels of both precision and success.

No matter the outcome, however, within days Elon Musk releases the video of the attempt. In the case of this failed landing and explosion, he accompanied the video with a simple, brief technical explanation of a possible cause:

“Falcon lands on droneship, but the lockout collet doesn’t latch on one of the four legs, causing it to tip over post landing. Root cause may have been ice buildup due to condensation from heavy fog at liftoff.”

No attempt to make excuses, or to assure anyone that this will never happen again, or to kowtow to the company’s shareholders. What SpaceX is attempting to do is really, really, really hard, and explosive results are an inevitable part of that. Can you imagine GM releasing that kind of statement if one of their brand-new concept cars didn’t start at a car show? Of course not: there would be apologies and finger-pointing, and people would be fired. Think back to the Obama administration’s investments in renewable energy companies a few years ago. The companies that we remember are the ones that went bankrupt after receiving a Department of Energy loan: Solyndra, Fisker, and Abound. Companies that received the assistance defaulted on $780 million, which is only a 2.28% default rate, meaning that over 97% of the loaned money was not defaulted on (the arithmetic is spelled out after this paragraph). In fact, a few years later, the government is turning an overall profit on those loans. But still, the thing that we remember about that program, and the thing that many people judge it on, is the few companies that did fail, the small percentage that did default. This is demonstrative of a failure-averse culture that is becoming prevalent. Many teachers, I think, would recognize it in their students: top students want to be told what to do, how to get an A, rather than exploring, experimenting, and risking the possibility of failure.
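Spelling out that arithmetic, using only the $780 million and 2.28% figures above (the implied size of the total loan portfolio is an inference from those two numbers, not a figure reported by the program itself):

\[
\text{total loaned} \approx \frac{\$780\ \text{million}}{0.0228} \approx \$34\ \text{billion},
\qquad
1 - 0.0228 = 0.9772 \approx 97.7\%.
\]

A program remembered for the 2.28%, not for the 97.7%.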

However, failure is a major part of success.  Progress cannot be made without risk, and with risk comes not the potential, but the reality of failure.  Greatness and progress come when failure is overcome, as Elon Musk and the SpaceX team are doing.  While I can’t speak for Mr. Musk’s motivations in posting videos of failure, I can see a potential attempt to shift our thinking.  By sharing his failures publicly and with no apology, Mr. Musk is embracing his failures, learning from them – and helping us to learn along with them.  I sincerely hope that he continues this trend with all of his companies, and that other companies who could push the limits take comfort, and perhaps a bit of courage, in witnessing the failures of others, so that they can equally embrace and learn from their own.  After all, that’s how we progress as a society.

And besides.  When Elon Musk fails, it’s usually accompanied by a big explosion.  Which always makes for a fun video.