AI and the Future of Programming

I was listening to the latest Java Posse roundup recording on the “Future of Software,” in which the topic of AI came up, and I couldn’t help but chime in on a few things.

The basic question being posed was: will computers eventually become smart enough to write their own software?  Some argued no, that there are fundamental limitations on how computers work that will prevent them from, say, designing usable user interfaces.  Others argued yes, that a sufficiently advanced computer could do anything a human being could do.  Still others argued that they already are writing their own software, at least relative to what computer programming was fifty years ago.

My answer: yes, and no.

Yes, I would agree that barring a fundamentally dualistic nature of mind and matter, a computer can (theoretically at least) do anything a human can (Ray Kurzweil has plenty to say on that subject if you are interested).  So given sufficiently advanced technology, you could develop a computer that does everything we do.  Yes, that technology will likely have little in common with today’s computer chips, as the human nervous system has a fundamentally different architecture than modern-day computers.  But that doesn’t mean it’s not possible to develop a computer system based on that architecture (though I suppose one could argue that the term ‘computer’ wouldn’t then be the best description of it, as arguably ‘computing’ isn’t what it would be doing).

However, just because we could do something doesn’t mean we would do it.  What exactly would be the point of designing a computer identical to a human being?  We’ve got too many real-life human beings running around already.  What we would want is a computer better than human beings.  We wouldn’t be designing them for the hell of it; we would be designing them to solve problems we have.  So certain aspects of human nature wouldn’t make sense to duplicate.  Hatred, as an obvious example.  Panicking in severe situations is another.  And most importantly, freedom of desire.

I’m not going to design a computer program or robot that is going to want to serve its own desires, at least not in the way humans do.  I am designing it to serve my desires.  Sure, given what was said in the previous paragraphs, it should be possible to design a real-life Hedonism Bot.  But why on Earth would anyone want to?  To be useful, it would have to be designed to care about its maker’s (that’s us!) desires.  And that brings us to a part of the software engineering process that will have to continue to be owned by human beings: defining the needs for which the software is created.

Even that isn’t as trivial as it sounds.  I don’t care if the automatic software generator is ten times as intelligent as human engineers; it’s still not going to be able to solve the problem of being given requirements that are too vague any more than its carbon-based equivalents could.  In order for it to generate the software, the requirements for that software are going to have to be drawn out.  What is the desired flow?  How should it handle errors?  What special cases is it going to need to handle?  Developing these requirements is going to end up being the programming of the future.  It’s probably going to be much more natural than what we write today, just as what we write today is much more natural than the programs written fifty years ago.  But there will remain a human element.
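To make that concrete, here is a minimal, entirely hypothetical sketch in Java of what “drawing out the requirements” looks like in practice.  The scenario (a username validator) and every name in it are my own illustration, not anything from the recording; the point is that whether the validate method is hand-written or generated by some future tool, the flow, the error messages, and the special cases in the table below all had to be decided by a person first.

// Hypothetical example: a requirement made precise enough to build (or generate) from.
// The rules encoded here are the "programming" a human still has to do.
import java.util.List;

public class UsernameSpec {

    // The behaviour under specification. Even if a code generator produced this
    // method, the decisions it encodes came from a human.
    static String validate(String username) {
        if (username == null || username.isEmpty()) {
            return "ERROR: username is required";          // error handling
        }
        if (username.length() < 3 || username.length() > 20) {
            return "ERROR: must be 3-20 characters";        // special case
        }
        if (!username.matches("[A-Za-z0-9_]+")) {
            return "ERROR: letters, digits and _ only";     // special case
        }
        return "OK";                                        // desired flow
    }

    public static void main(String[] args) {
        // Each row is a requirement someone had to spell out: input, expected result.
        List<String[]> cases = List.of(
                new String[]{"alice_99", "OK"},
                new String[]{"", "ERROR: username is required"},
                new String[]{"ab", "ERROR: must be 3-20 characters"},
                new String[]{"bad name!", "ERROR: letters, digits and _ only"});

        for (String[] c : cases) {
            String actual = validate(c[0]);
            System.out.printf("%-12s -> %-40s %s%n",
                    "\"" + c[0] + "\"", actual,
                    actual.equals(c[1]) ? "as specified" : "SPEC VIOLATION");
        }
    }
}

The interesting part isn’t the code in the middle; it’s the table of cases at the bottom.  That table is the piece no generator, however clever, can conjure out of a vague request, and it’s the piece I’d expect humans to keep writing, in whatever more natural notation the future gives us.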
