Tag Archives: Featured

Messages in the Building Blocks of Life

Craig Venter, one of the leading names in genomics research, announced last week that he and his team have created a new single-celled organism with what they think could be close to the minimum number of genes needed to sustain life and reproduce.  They did this by “simply” (there was nothing simple about it) removing genes one at a time from a bacterium with an already small genome until they reached the bare minimum needed to function.  Gene manipulation is of course nothing new, but Venter and his team are inching closer to something new.

To be clear, Venter and his team are not creating life – they are not growing anything from scratch.  Rather, they are taking existing life and, over generations, deliberately manipulating it.  The application of this research, many, many, many years from now, would hopefully be that organisms like this one could be manipulated for manufacturing purposes.  Imagine a bacterium that could consume and metabolize CO2, and produce some kind of fuel, like methane, as a waste product.  Or vaccines.  Or any of a million other chemical compounds.  This becomes almost the biological equivalent of the nanobots of science fiction.  Imagine if these bacteria could be programmed, through their DNA, both to create the raw materials for a project and to join those materials into a usable construction.  Bacteria that metabolize a pollutant, produce a high-strength polymer, and then collaboratively bind that polymer into usable structures and shapes.  By producing the smallest genome possible, Venter hopes to provide a base platform for others to build on in order to maximize the applications of this type of bacterial manufacturing.  Imagine the impacts on human spaceflight alone – instead of bringing raw materials, crews could bring blank bacteria (or algae, or any other simple, rapidly growing organism) with a database of genes to plug in based on the needs of the mission.

While the practical applications of the bacteria get my science fiction writer mind whirling, it’s something else that got my rhetorical gears turning.  One of the things that Venter and his team figured out how to do years ago is what’s called watermarking a genome.  Basically, it’s signing the genome directly in the code structure of the genome itself.  In this case, they watermarked the DNA of this stripped-down bacterium with the name of the J. Craig Venter Institute.  Other genomic creations by Venter have been signed with the names of the important people on the project, inspirational quotes, and even a website address where people who manage to crack the code can let the Venter Institute know that they cracked it.  Early versions of the DNA watermarks followed a very simple code: each letter of the alphabet was represented by an amino acid (itself specified by a codon, a group of three base pairs) – and since there are only 20 amino acids, six letters went unrepresented.  Venter and his team have since created a much more complicated code, one that represents all of the letters, numbers, and punctuation for several languages.
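The basic idea is easy to sketch in code: assign each letter a three-base DNA “word,” and a message becomes a sequence you can splice into a genome.  The mapping below is invented purely for illustration – it is not the Venter Institute’s actual code, and a real watermark would also have to avoid disrupting the functional genes around it.

```python
# Toy sketch of a letter-to-codon watermark scheme.  The codon
# assignments here are made up for illustration only -- they are NOT
# the actual code used by the J. Craig Venter Institute.
CODON_FOR = {
    "V": "GTT", "E": "GAA", "N": "AAT", "T": "ACT", "R": "CGT",
    "I": "ATT", "S": "TCT",
}
# Invert the table so the message can be recovered from the DNA.
LETTER_FOR = {codon: letter for letter, codon in CODON_FOR.items()}

def encode(message: str) -> str:
    """Turn a message into a DNA string, one codon per letter."""
    return "".join(CODON_FOR[ch] for ch in message.upper())

def decode(dna: str) -> str:
    """Split the DNA back into three-base codons and look each one up."""
    codons = (dna[i:i + 3] for i in range(0, len(dna), 3))
    return "".join(LETTER_FOR[c] for c in codons)

watermark = encode("VENTER")
print(watermark)          # GTTGAAAATACTGAACGT
print(decode(watermark))  # VENTER
```

Anyone with the table (or the patience to crack it) can read the signature back out – which is exactly how the Institute’s website address was meant to be found.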

For the direct purposes of Venter’s needs, these watermarks serve several purposes.  First, they obviously sign and take credit for the genome.  But second, they also prove to Venter, and anyone vetting their results, that this is, in fact, synthetic DNA, and not just some kind of natural contaminant or naturally occurring DNA.

This represents an entirely new medium of communication, one with its own affordances and constraints.  For now, the medium is based on existing textual languages, most likely for simplicity’s sake.  But much as computer programming has developed its own languages (binary being the most basic), what other language forms could be constructed using base pair matchups?  Aside from the language itself, imagine the potential for confidentiality in this type of communication.  Bacteria would be remarkably easy to conceal.  Then again, in order to eliminate the possibility of contamination of the message being delivered, the bacteria need to be carefully controlled, and kept alive.  Transportation takes time, and bacteria grow and change over time.

But why stop at bacteria?  Could we encode messages into our own DNA?  Or our children’s?  How might genetic mutations interfere with these types of messages encoded into DNA?  And what about ownership after a mutation occurs?  Does the owning creator, the signatory of the DNA, still own the bacteria once they have mutated?  Not to mention the sophisticated equipment it takes to both produce and decode these messages.  And the messages have to be produced and encoded into the genome in such a way that they don’t interfere with the rest of the functional DNA.

This is a complicated medium to work with, but already, people have started thinking about how it is to be used.  In addition to the name of his institute and the creators of a previous genome, Venter included some inspirational quotes in the watermarks.  Why?  Because “we were criticized for not trying to say something more profound than just signing the work.”  Already, people are recognizing that with a change of this magnitude, it’s not enough to simply take credit.  If you’re going to use a medium so difficult and expensive, you had better be saying something profound.

What applications for DNA watermarks can you think of?  Are there ethical considerations that we should be thinking about?

Science as/versus Faith

As undoubtedly none of you know, I am fascinated by the disconnect that seems to exist between science and religion.  We see this perhaps most clearly in the battle over evolution that inexplicably still rages, but there are many other points of contention out there.  Just think of people who refuse medical treatment for religious reasons.  Likely you could have guessed this, but when there are sides to be taken, I tend to come down quite firmly on the side of science.  In the evolution-in-schools debate, for example, I’m bemused that there is a debate at all.  Evolution is taught in science class because it is science.  If you want to teach creation, teach it in religion classes, alongside all of the other creation myths from various religions.  Debate over.  But I’ve been thinking a lot lately not about the facts of the debate, but about the reason for the debate at all.

I think it’s clear that science feels superior to religion because science is based on facts.  Science is observable.  Science is reproducible.  Science is logical.  Science has resulted in tangible, noticeable improvements in our society (and some not-so-great things too, sure).  Most importantly, though, science is based on the natural world.  Things you can see and feel and touch and do.

Religion, on the other hand, is based on faith.  Belief in whatever it is you believe in.  Belief that your beliefs are correct, and that others are wrong.  Unfortunately, there’s nothing tangible to point to in order to defend that faith.  Usually there’s a religious text that can be pointed to as evidence for the points you’re making, and many people try to reference those texts in their arguments related to science.  But the problem with the Bible as evidence is that it requires the person you’re arguing with to share your faith.  I, as a non-Christian, consider any argument that references the Bible to be invalid, because to me, it is simply a book written by people.  I lack the faith required to accept the words of the Bible as proof of anything.

I can feel religious people bristling.  Don’t worry – your time is coming.

Up until now, I’ve maintained an attitude of aloof amusement toward people who disagree with science based on their religious beliefs.  That level of faith seemed naive.

But the more I’ve thought about it, the more I’ve realized that I am operating in my world based on a different kind of faith.  I have faith in science.

Now, I don’t mean that in the sense that “I believe in science, that it will come through for us.”  I mean it in the sort of blind, simply accepting sort of faith that accompanies religion.  You see, I am not a scientist.  Most of what I read related to science, I do not understand.  I certainly am in no position to judge or critique the science that is presented to me.  Why do I believe in evolution?  Because scientists tell me that it is so.  I haven’t done any research into evolution.  I haven’t personally explored genetics, or selective breeding, or the fossil record.  I have read articles about evolution by other people who most likely aren’t scientists, but rather reporters.  If evolution were wrong (and just to be clear, I’m not trying to imply that it is), I would have no way of knowing that, nor any grounds to even challenge the premise of evolution.  I could simply regurgitate responses that other people have come up with.  Other scientists.  I accept the consensus of the scientific community on faith, because I have faith in the scientific process.  I have faith that mistakes will be sorted out in peer review.  I have faith that the reproducibility of an experiment is a major sign of the veracity of its results.  But I’ll never reproduce the experiments myself, so again, I have to operate on faith.

Now, this is not to suggest that scientists themselves are operating on faith.  They are qualified to analyze the results.  They are qualified to critique.  They are qualified to disagree, or disrupt.  And I recognize that if I wanted to, I could make myself qualified – perhaps that is part of why I’m writing this blog.  But I’m not, and I likely won’t.  No, I am going to keep accepting science on faith – ferreting out nonsense where I feel qualified to do so (as a religious person might ferret out those who are misinterpreting religious texts).

This is not meant to suggest that science is wrong about the big things, or that there is some reason to question evolution, or climate change, or whether medicine cures.  I have complete and absolute faith in the scientific process that led us to those things, and that the scientific process will continue to protect us and make our lives better.  But it does make me pause when I am listening to an argument against evolution that quotes the story of Genesis as proof that evolution is nonsense.  Because ultimately, my response, listing all of the facts and figures that demonstrate evolution, is based on nothing more than my own faith in the scientific process.  So maybe, just maybe, I should stop feeling so superior when confronted with expressions of faith that I consider naive.

I should.  But I probably won’t.  Because as everyone knows, my faith is more founded than yours.

To Think or Not to Think. What is the question?

I’m currently reading a pair of books about artificial intelligence.  The Emperor’s New Mind, written by Roger Penrose in 1989, is an exploration of how computers work, and why they will never be able to mimic the intelligence of the human mind.  How to Create a Mind, written by Ray Kurzweil in 2012, is about how the human mind works, and how computers will be able to mimic those structures and reach a level of true artificial intelligence.  I’m deliberately reading them semi-simultaneously to maximize the juxtaposition of time and technological advancement, which is stark.  But I don’t want this blog post to be a comparison of the two books or approaches.  Rather, I want to talk about the question that has been plaguing me since I started reading them.

What is intelligence?

More specifically, how do we identify intelligence in others?  If we’re trying to build artificial intelligence, what are our standards?  How do we know when we’ve made it there?

I am intelligent.  No one, and nothing, can convince me otherwise.  My logic for this pronouncement is trite and cliche: I think, therefore I am.  I am intelligent because I can recognize my own intelligence.

So I know that I am intelligent.  But how do I know that you are intelligent?

Well, since you are reading these words, that provides a measure of intelligence.  But a computer can read these words as well, technically.  If you leave a comment, you demonstrate your ability to interact, to communicate based on these words.  To create something new, based on what I’m talking about.  You’ve probably noticed on your own blog, though, lots of spam comments that are computers doing exactly that.  But those computers don’t do it as well as you can.  I can recognize a spam comment versus one from one of you.  So that commenting ability shows your intelligence.  If we were in person, we could have a conversation about what I wrote.  That conversation would demonstrate to me that you are intelligent.

In fact, conversation has long been one of the hallmarks of artificial intelligence, in something called a Turing test.  The test is this: in a blind experiment, can a human judge have a conversation with a computer and a real person, and reliably identify which one is the computer?  Essentially, can a computer mimic human conversation, and pass as a human?  And indeed, a computer named Eugene passed a limited Turing test a few years ago (well, kinda, sorta, ish, not really, but hey, good effort).  Does that make Eugene intelligent?  Is mimicry enough of a requirement for intelligence?

I think not (as would most of the world).  Part of the problem that we face with artificial intelligence is that the most convenient way to test intelligence is by performing tasks.  How smart are rats?  Put them in a maze and find out.  How smart are monkeys?  See if we can teach them sign language.  But intelligence is not defined by task completion.  I am not intelligent because I can hold a conversation.  I can hold a conversation because I am intelligent.  I am not intelligent because I can write these words, and drive a car, and play chess, and write music.  I can do those things because I am intelligent.  A computer can do all of those tasks as well.  But the computer does them by following a preset series of commands, an algorithm created by a human to complete those tasks.  So in a world where more and more of the tasks that we might use to show our intelligence can be completed as well or better by a computer or machine, how do we identify intelligence?  How do we know when we truly have an artificial intelligence, and not just some combination of algorithms (which of course assumes that we ourselves are more than simply a combination of algorithms, which is hardly a sure thing)?

Perhaps the task that best shows our intelligence is the task of questioning.  My intelligence is not demonstrated by my ability to converse adequately with you.  My intelligence is demonstrated by my ability to question the world around me, and, out of those questions, to manipulate the world around me.  We are intelligent because we can ask the question “What is intelligence?” and then design experiments to try to figure it out.  In other words, we are able to do more than simply follow a series of commands to solve a problem.  We are able to identify the problem, and then create the series of commands to use to solve it.  Perhaps we’ll know when we have created artificial intelligence because the computer will start trying to determine whether or not we are intelligent.  Maybe God knew that He got mankind right when we started to question His existence?

Question for another day: why are we thinking about intelligence in terms of human intelligence, based on human tasks?  Why wouldn’t artificial intelligence look completely different from human intelligence, with its own set of values and goals?

Only tangentially related, but just because I wanted to add a video: https://www.youtube.com/watch?v=9TRv0cXUVQw

Scientific from Birth

The other day, I was sitting with one of my twin boys on the floor.  He was sitting in a U-shaped pillow, and I was next to him in socks and sweatpants, just playing on the floor.  All of a sudden, he abandoned the toys that we were playing with, and started stroking my foot.  Then my pants.  Then his pillow.  Then socks, pants, pillow; socks, pants, pillow.  This went on for about five minutes.  I realized watching him that he was exploring the different fabric textures, trying to figure out what about them was different.  Very methodically, very carefully, and in the same order.  Socks, pants, pillow.  Socks, pants, pillow.  He’d thought through an experiment, and was carrying it out, trying to better understand the world.  He was following the scientific method to a T.  When he finally understood whatever it was that he was trying to understand, he looked up at me, laughed, and then got back to playing with his toys.

As it turns out, much of babies’ play time actually involves this methodical kind of experimentation.  The scientific method is something that we seem to have ingrained in us from birth.

A chart showing how babies follow the scientific method
Poster courtesy of: Tiffany Ard

And this phenomenon does not seem to be merely the delusion of a hopeful father projecting his shattered dreams of scientific brilliance upon his children.  It turns out, this is borne out by science.

According to an article on the National Science Foundation’s website, multiple studies have shown that “Babies Are Born Scientists.”

“[Allison] Gopnik and her colleagues found that young children, in their play and interactions with their surroundings, learn from statistics, experiments and from the actions of others in much the same way that scientists do.”

So I wonder what that means for children who grow up and later either don’t like science or don’t understand science (or at least think that about themselves).  At some point in their development, something turned them away from their natural way of interacting with the world, and took away the natural experimentation and style of learning that connects so well to the skills of professional science.

Definitions or traditions? The Pluto Paradox

If you’re old enough to be reading this blog, then when you were in school, there were nine planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus (giggle), Neptune, and Pluto.  Which means that in August 2006, you felt the sting of betrayal when the International Astronomical Union (IAU) decreed that our beloved little Pluto was no longer a planet.  I’m sure there are many groups out there still claiming that this is all a plot by the Common Core, the liberal media, President Obama, President Bush, the tea party, or the academic elite to ruin and confuse the education of our precious little cherubim one planet at a time.  There’s even at least one petition begging that Pluto’s planetary status be reinstated by the IAU (though if you sign it, you can’t read my blog anymore).

Petitions and nostalgia aside, the debate about Pluto’s planet-ness is complete – Pluto is not a planet.  Why not?  There are a few reasons, mostly to do with gravity.  Pluto has always been an outlier in the cadre of nine planets.  Really far away, and much, much, much smaller than the others.  But Pluto isn’t actually all that lonely.  It orbits as part of an area of space known as the Kuiper Belt.

Kuiper Belt orbit. Credit: Don Dixon, by way of Universe Today

The Kuiper Belt is similar to the asteroid belt in that it contains a lot of debris – condensed bodies of matter that orbit the Sun in a particular region of space.  There are over 1,000 known objects in the Kuiper Belt, and counting.  Up until 2003, Pluto was the largest known body in the Kuiper Belt, and therefore retained its tenuous grasp on planethood.

In 2003, however, astronomers discovered Eris, a Kuiper Belt object larger than Pluto.  This led to a quandary of definitions.  If we were to insist that Pluto remain a planet, then we would have to declare Eris a planet as well – along with any other objects of similar size in the Kuiper Belt, of which there could be many that we simply haven’t spotted yet.  If the IAU went that route, then we could jump from 9 planets to 100, most of which would not fit one of the important requirements of planethood: being gravitationally dominant in a region of space.  So instead of creating that level of future confusion, astronomers created a new class of celestial body, dwarf planets, with Pluto and Eris as the first representatives.

And the official planet count dropped to 8.

Enter Planet X.  Planet X has long been theorized, mythologized, and subsequently scoffed at by the scientific community.  Konstantin Batygin and Mike Brown (coincidentally the man who discovered Eris and thus “Killed Pluto“) of the California Institute of Technology (Caltech) in Pasadena say that this time, things are different, because this time, “we’re right.”

Planet X is a mathematically theorized planet outside the orbit of Neptune with an extremely long orbital period, about 15,000 years (or, if you’re a resident of Planet X, one year).  As we’ve seen in our discussion of gravitational waves, much in science that deals with extreme size or distance, large or small, is only mathematically theorized.  Black holes, the Higgs particle for a long time, much of quantum mechanics – all are, or until recently were, only mathematically theorized.  Planet X is similar in that no one has actually observed it directly.  Just as we observe black holes through their consumptive destruction, Brown and Batygin theorized Planet X by observing the strange orbits of six objects that orbit beyond Neptune.  Essentially, their orbits bring them close to the same area of space with quite a bit of regularity – too much regularity for coincidence to be a likely explanation.  There’s probably something large and gravitationally dominant out there, tugging on their orbits.  Something like a planet.
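To get a feel for what a 15,000-year orbit implies, you can use Kepler’s third law: for bodies orbiting the Sun, the square of the period (in years) equals the cube of the orbit’s semi-major axis (in astronomical units).  This is a round-numbers sketch, not the researchers’ published estimate.

```python
# Kepler's third law for the Sun's family: P^2 = a^3, with the period P
# in years and the semi-major axis a in astronomical units (AU).
# The periods below are round, approximate values for illustration.
def semi_major_axis_au(period_years: float) -> float:
    return period_years ** (2 / 3)

for name, period in [("Neptune", 165), ("Pluto", 248), ("Planet X", 15_000)]:
    # Roughly 30 AU, 39 AU, and 600 AU respectively.
    print(f"{name:>8}: ~{semi_major_axis_au(period):,.0f} AU")
```

A ~600 AU orbit is some twenty times farther out than Neptune, which is part of why actually photographing Planet X is such a daunting prospect.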

Showing the Orbit of Planet X versus the known planets. Source: Caltech/R. Hurt (IPAC), by way of NPR

So the fate of the Terran family of planets hangs in the balance once more.  If Planet X does exist, then we’re back up to 9, with the little Kuiper Belt dwarf planets separating the old stalwarts from the new, lonely inclusion.  Statistical certainty aside, changing the number of acknowledged planets in the solar system is a process, a process of definitions, as we saw already with Pluto.  Planet X will almost certainly not be added to school curricula until somebody actually gets a picture of it, which could take a very long time, given the amount of sky that needs to be scanned to find it.  But someday, we may jump back up to nine planets – just a different nine planets.

This saga of definitions intrigues me for what it says about science, and in particular about how science changes.  It’s important to remember that Pluto wasn’t downgraded from planethood because science was wrong about Pluto, or because Pluto somehow ceased to exist or became something other than what we thought it was.  Pluto is still the same size, Pluto is still in the same place, Pluto is still everything that we thought it was.  The fundamental data about Pluto never changed.  What changed was the context in which we understood that data.  What had to change was our definitions, the language that we use to describe it, based on new information as our knowledge of far-off things grew.  Likewise, Planet X does not change what we already know about the solar system – it just adds something new.  Obviously, there are scientific discoveries that fundamentally change everything – the structure of the atom, DNA, electricity, etc.  But there are other discoveries that simply require a reevaluation of our language, a shifting of definitions.  Tradition and nostalgia might cause us to instinctively rebel against these sorts of changes, because they seem on the surface to be unimportant.  Why couldn’t we call Pluto a planet, and Eris not a planet, especially in light of the fact that we’re well on our way to discovering a whole new planet!?  I think the answer is that tradition and nostalgia can get in the way of accuracy and specificity.  Words have meanings because we agree that they have meanings, and sometimes it is important to change our agreed-upon meanings so that we can accurately represent the world as we understand it.  Even if that means saying goodbye to an old friend in the registry of planets.

Or saying hello to a new one.

The Limits of Science

On Wednesday, February 10, George Perrot was released from prison after spending thirty years behind bars, convicted of raping a woman in 1985.  In all that time, both Perrot and the woman he was accused of raping have maintained his innocence.  On Wednesday, an appeals judge finally agreed and set Perrot free, stating that:

“The record before the court, which I have subjected to rigorous examination, makes me reasonably sure that George Perrot did not commit those grave offenses.”

The judge also recommended that prosecutors not seek to retry Perrot for a third time.

My interest in this case is not based on Perrot’s innocence – I had no knowledge of, or interest in, this case before hearing about it on NPR on Thursday.  What I am interested in, however, is the reason that Perrot was released after all this time, and why the judge is so certain that Perrot is innocent.

That reason is science.  (Not specifically physics in this case, but at the end of the day, most science comes back to physics, doesn’t it?)

Perrot was convicted, twice, based largely on FBI hair analysis.  This was before modern DNA analysis was available – in 1985, hair analysis was done by hand with a microscope.  In a stunning admission earlier this year, the FBI and the Justice Department “formally acknowledged that nearly every examiner in an elite FBI forensic unit gave flawed testimony in almost all trials in which they offered evidence against criminal defendants over more than a two-decade period before 2000.”  When they say almost all, they mean it.  Of 268 cases reviewed, “26 [examiners] overstated forensic matches in ways that favored prosecutors in more than 95 percent.”

95%.

Now of course, not all of those 95% of cases relied solely on the hair match evidence.  But some did.  Like George Perrot’s.

Florence Graves of the Schuster Institute for Investigative Journalism summed the problem up nicely in an NPR article:

“Many people in the scientific community knew decades ago, including the FBI, that hair microscopy was used beyond the limits of science.”

Beyond the limits of science.  That’s a really interesting concept in and of itself.  What are the limits of science?  Science is, after all, the pursuit of truth, the pursuit of understanding how the universe works and how we can make it work for us.  But science isn’t perfect.  Science doesn’t have all the answers, and can’t explain everything.  Science is based on questions, and every new discovery leads to new questions, or to a complete overturning of the answers to the questions that came before.  We understand more than we ever have, but we are also aware of more things that we don’t know than we ever have.

The problem we run into is when we let the hubris of our knowledge drive us to pretend that we know more than we really do.  Say, for example, in reporting hair matches with more accuracy than the science is actually capable of.  Or in assigning causation when the evidence merely suggests a correlation.  Science allows us to make incredible discoveries and change our lives and the world around us, but it is not infallible.  Science is not all-knowing, even if outside pressures (political pressure to secure a conviction, for example, or monetary pressure from oil companies to ignore climate change) want it to be.  The limits of science are not actually limits of science – they’re limits to how we apply the science that we have access to.  Science is fallible because we are fallible, and because we want science to conform to our desires and expectations.


Multimodal Gravitational Waves

Scientists in the LIGO Collaboration have made the first direct detection of gravitational waves, and with it the first direct detection of black holes.  Up until now, both gravitational waves and black holes have existed largely in the realm of mathematics, starting with Einstein’s theory of general relativity.  Black holes could be observed indirectly by witnessing their consumptive, destructive effect on light-producing matter around them, like stars, but because they don’t emit light of their own, they are all but impossible to see.  Gravitational waves have been even more invisible, because of just how faint they are.  But on the 14th of September, 2015, all that changed, and scientists at two facilities of the LIGO Collaboration, using a complex laser detection system, independently and near-simultaneously detected the gravitational waves of two black holes colliding a billion light-years away.

The BBC has a fantastic group of articles explaining the discovery, the underlying science behind it, and why the discovery matters.  In a nutshell, gravitational waves are caused when objects with immense gravity, such as stars or black holes, suddenly have their gravity wells changed.  The disruption of space-time sends ripples across the universe, much like tossing a rock into a pond.  Gravitational waves, once they get moving, are essentially unhindered by anything in their path – they pass through just about everything as if it weren’t even there.  This means that they can travel immense distances (a billion light-years, say) without degrading.  Scientists were finally able to detect them by using a laser system capable of measuring the minuscule fluctuations in space-time caused by these waves – fluctuations far smaller than an atom.  The discovery has finally confirmed the last major prediction of Einstein’s relativity, the mathematical framework upon which we understand much of the universe.
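“Far smaller than an atom” undersells just how small.  The detected strain – the fractional change in length – was on the order of one part in 10²¹, and LIGO’s interferometer arms are about 4 km long.  The figures below are rough ballpark values, not the collaboration’s published numbers, but they give a sense of the scale.

```python
# Back-of-envelope for why the measurement is so hard.  Ballpark
# values only, not the LIGO Collaboration's published figures.
strain = 1e-21               # dimensionless fractional change in length
arm_length_m = 4_000         # each LIGO arm is about 4 km long

# The actual change in arm length the lasers had to resolve:
displacement_m = strain * arm_length_m

proton_diameter_m = 1.7e-15  # approximate diameter of a proton

print(f"Arm length change: {displacement_m:.1e} m")
print(f"That is roughly {proton_diameter_m / displacement_m:,.0f}x "
      f"smaller than a single proton")
```

A length change hundreds of times smaller than a proton, measured over a 4-kilometer arm – which is why the detection took a “complex laser detection system” and decades of engineering.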

These articles, which all work together to make the information as accessible as possible, are rather rhetorically complex, and I thought it would be interesting to take a look at some of the ways that they are communicating this information.

The primary article from the BBC functions as one would expect – as an overview of what is happening.  It’s interesting that the BBC is working so hard to establish a certain ethos not for themselves, but rather for the discovery itself.  Multiple times throughout the article, they mention how this discovery is destined for a Nobel Prize in the same way that Luke Skywalker was destined to confront Vader.  Even if few people can name Nobel Prize winners in the sciences, everyone knows that Nobel Prizes mean big, important discoveries.  It’s worth noting, however, that this over-the-top Nobel shilling seems reminiscent of media pundits’ insistence that the upcoming presidential race would inevitably be Clinton versus Bush – which is looking unlikely on the Republican side, and less of a landslide than expected on the Democratic.

To help establish that gravitas for their Nobel pick of the year, the BBC also listed other discoveries on the same tier of importance – the discovery of the Higgs particle (though do people really know that one?) and the discovery of the structure of DNA (everyone knows that one – admit it, you immediately pictured the double helix with different colored rungs, didn’t you?).  They also have a separate page of “Reaction: Gravitational Wave Discovery”, with quotes from scientists you’ve never heard of about why this is so important.  Finally, they have the godfather of black holes himself, and arguably the single most recognizable scientist in the world, Dr. Stephen Hawking, featured both in the article and in his own video – even though he actually had nothing to do with the team that made the discovery.  The fact that Hawking is excited means that the rest of us should be excited too.

Very well.  I believe you, BBC – this is important, because all of these scientists say it is!  Well done.  However, I’m still confused.  Help?

Fortunately, BBC anticipated that as well, and has created a decently multi-modal experience to help explain what’s going on.  First and foremost, they’ve taken full advantage of one of the major affordances of web-based articles by creating multiple pages about different facets.  Rather than trying to explain everything in one large, clunky, difficult-to-understand article, they break it down.  The primary article has enough information that you can get it, but doesn’t go into huge detail.  Another article, which we’ve already talked about, showcases reactions.  Yet another article features anticipated questions that people might have about this discovery, with digestible answers that get into the more nitty-gritty of the science.  They also have:

Unfortunately, this plethora of articles all about the same subject is only loosely connected, via the auto-generated “Related articles” section.  Only the primary article features links to all of the others – BBC missed an opportunity by not creating a dedicated landing page where all of this information could be accessed quickly and easily, without scrolling to the bottom of a several-thousand-word article.  Even with that said, however, the breadth and depth of the content that they’ve created around this discovery simply enhances the ethos of the discovery.  If the BBC is willing to spend this much time, money, and energy on these gravitational waves, then they must be important.

Aesthetically Scientific

Back in January, NPR’s Science Friday did a story about Steve Erenberg, a man who collects old scientific equipment. To go along with the radio interview, in a wonderfully multimodal experience, Luke Groskin, the story producer, created a video interview/tour of Steve’s store.

Something that sometimes gets lost when we confront technology is the purely aesthetic value of that technology, and what those aesthetics are saying.  One of my favorite lines from the video with Steve is about the importance of aesthetics.

Quack devices are “designed to be better looking than their purpose…The more important it looked, the better people thought it worked, and the more money the doctor would get.”

Aesthetics matter.  Are you going to be comfortable jumping into an MRI machine with rust?  How about one with teeth?  Probably not, right?  Part of the job of technology is to convince people to use it, and we use technology that looks, feels, and sounds usable.  If no care were given to aesthetics, then how many inventions would come to mass market?  The aesthetics of our technology, i.e., not having MRI machines with teeth, play an important rhetorical role in establishing ethos for the equipment itself.  Such and such looks good, looks modern, looks sleek, and therefore must perform its function remarkably well.

Another connection that I really enjoyed was made by Luke Groskin, the producer.  He made the comment in a later interview with Ira Flatow that:

“You can see the art movement at which they [the science pieces] were created – the art movements during which they were created in the actual pieces.  So you can see expressionism in these dentistry practice anatomical models…You can see Victorian clawed tables. You can see more modern tin and aluminum, very sleek, very very simplistic designs.”

In other words, the aesthetics of science and technology are a product of the culture in which they exist.  Some of these devices in the video might look like torture devices to us now, because that’s what our cultural language has decreed – torture devices are old and antiquated, not a product of our modern society, and many of the slasher films that we watch are based on technology of the past.  They look terrifying to us because of our cultural climate, but we can’t ever forget that our things only look the way that they do because of our cultural expectations.  We only need to look at yearbook photos to see how quickly cultural expectations change.  So we need to perhaps be aware that Now is not forever, and that modern gives way to antiquated very quickly. What will our great-great-grandchildren think of the absurdly clunky and spacious laptop computer that I am typing this article on?  As Steve Erenberg summarized quite nicely:

“That’s what science is.  We always think state of the art and we’re ahead of our time and it’ll never get any more modern than that, but it’s always changing.”

Science is not static, but is perpetually changing, as is every other facet of our culture.  And that change is not a bad or scary thing.  That change is called progress, and is the reason that the “quack devices,” as Erenberg calls them, are not still in use.  Also, let’s remember that “quack devices” might not necessarily be the right label – just because something didn’t work doesn’t mean that it wasn’t valuable.  After all, the contribution of failure to scientific progress is extremely important.  Science changes, culture changes, and hopefully, those changes are for the better.

When Failure Counts

On January 18, 2016, SpaceX attempted once more to land their Falcon 9 rocket on a barge in the ocean.  The resulting failure was pretty spectacular (there were no people aboard, and no one was injured):

Throughout most of the history of SpaceX, Elon Musk has been quite public about his company’s many successes, and occasional misfires/spectacular explosions.  Let’s remember what the SpaceX team is trying to do – land a rocket that has flown to space and delivered a payload, in an effort to make rockets reusable, and thus bring down the cost of spaceflight.  This had never been done before SpaceX.  And prior to this attempted landing, SpaceX had succeeded several times, with varying levels of both precision and difficulty in the attempt.

No matter the outcome, however, within days Elon Musk releases the video of the attempt.  In the case of the failed landing and explosion video, he accompanied it with a simple, brief technical explanation of a possible cause:

“Falcon lands on droneship, but the lockout collet doesn’t latch on one of the four legs, causing it to tip over post landing. Root cause may have been ice buildup due to condensation from heavy fog at liftoff.”

No attempt to make excuses, or assure anyone that this will never happen again, or kowtow to shareholders in the company.  What SpaceX is attempting to do is really, really, really hard, and explosive results are an inevitable part of that.  Can you imagine GM releasing that kind of statement if one of their brand new concept cars didn’t start at a car show?  Of course not – there would be apologies and finger-pointing and people would be fired.  Think back to the Obama administration’s investments in renewable energy companies a few years ago.  The companies that we remember are the ones that went bankrupt after the Department of Energy loan – Solyndra, Fisker, and Abound.  Companies that received the assistance defaulted on $780 million – which is only a 2.28% default rate.  That means that over 97% of the loaned money was not defaulted on.  In fact, a few years later, the government is turning an overall profit from those loans.  But still, the thing that we remember about that program, and the thing that many people judge that program on, are the few companies that did fail, the small percentage that did default.  This is demonstrative of a failure-averse culture that is becoming prevalent.  Many teachers, I think, would recognize this in their students – top students are looking to be told what to do, how to get an A, rather than exploring and experimenting and risking the possibility of failure.
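The arithmetic above is worth a quick sanity check.  Here’s a back-of-the-envelope sketch (the $780 million and 2.28% figures come from the paragraph above; the implied total loan amount is simply derived from those two numbers, not an independently sourced figure):

```python
# Sanity-checking the DOE loan default figures cited above.
defaulted = 780e6        # dollars defaulted (figure cited in the text)
default_rate = 0.0228    # 2.28% default rate (figure cited in the text)

# If $780M represents 2.28% of the portfolio, the implied total loaned is:
total_loaned = defaulted / default_rate
print(f"Implied total loaned: ${total_loaned / 1e9:.1f} billion")
# → Implied total loaned: $34.2 billion

# Share of the loaned money that was NOT defaulted on:
not_defaulted = 1 - default_rate
print(f"Not defaulted: {not_defaulted:.2%}")
# → Not defaulted: 97.72%
```

Which confirms the claim: well over 97% of the money was repaid as agreed, even though the defaults are what stuck in the public memory.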

However, failure is a major part of success.  Progress cannot be made without risk, and with risk comes not the potential, but the reality of failure.  Greatness and progress come when failure is overcome, as Elon Musk and the SpaceX team are doing.  While I can’t speak for Mr. Musk’s motivations in posting videos of failure, I can see a potential attempt to shift our thinking.  By sharing his failures publicly and with no apology, Mr. Musk is embracing his failures, learning from them – and helping us to learn along with them.  I sincerely hope that he continues this trend with all of his companies, and that other companies who could push the limits take comfort, and perhaps a bit of courage, in witnessing the failures of others, so that they can equally embrace and learn from their own.  After all, that’s how we progress as a society.

And besides.  When Elon Musk fails, it’s usually accompanied by a big explosion.  Which always makes for a fun video.