While the world marvels at the wonders of science, the Internet, and even the prospective rise of drones, the question remains: is Africa living up to the digital age or playing second fiddle? Nii Commey argues that it is about time Africa rose to the challenge and developed its own digital responses.
In 1958, the Hungarian-born US mathematician, physicist, inventor and polymath John von Neumann was recalled, in a tribute by his colleague Stanislaw Ulam, as observing that the ever-accelerating progress of technology gives the appearance of our approaching some essential singularity beyond which human affairs as we know them could not continue. When science fiction writers latched onto the idea, the stage was set for life to become stranger than fiction. And when the futurist Ray Kurzweil cited von Neumann's use of the term singularity in his 2005 book The Singularity Is Near, the concept was popularised.
But what is the singularity? Rooted in the idea of exponentially accelerating technology, it is the theory that the speed of technological development will cause a runaway effect in which artificial intelligence outpaces human intellect and control, radically changing civilisation in an event called the singularity. Beyond that point, humanity's interaction with its hardware and software may make events unpredictable and unknowable.
“With the advent of robots that can talk, play soccer, and manage your house, we might end up becoming the guard dogs, while they watch TV,” jokes Dr Kwame Amuah, a nuclear physicist and entrepreneur, who believes the time is ripe for Africa to come up with its own technology initiatives.
He stresses, however: “It is time to unleash the African genius.”
But where is the African challenge? The Johannesburg-based Singularity Institute Africa, founded by Dr Amuah, hopes to answer that question. In early February the Institute launched its website and Facebook pages to draw attention and interest to its work. There are seminars planned across the continent this year aimed at harnessing the African genius and contesting the digital space. The implications for development and growth are enormous.
Rightly so. Gordon E. Moore, co-founder of the Intel Corporation, observed and postulated in 1975 that over the history of computing hardware, the number of transistors in a dense integrated circuit doubles approximately every two years. His observation largely proved accurate, and has become famous as Moore's law, which effectively means that the power of technology doubles about every two years. Intel executive David House refined the period to 18 months. At that rate, the exponential growth of technology becomes very rapid over time. This improvement has dramatically enhanced the effect of digital electronics in nearly every segment of the global economy, from banking to photography.
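The doubling described by Moore's law is simple compounding arithmetic. As a back-of-the-envelope sketch (the function and the Intel 4004 starting figure are illustrative assumptions, not from the article):

```python
def transistors(initial: int, years: float, doubling_months: float) -> int:
    """Projected transistor count after `years`, doubling every `doubling_months` months."""
    return round(initial * 2 ** (years * 12 / doubling_months))

# Starting from the 2,300 transistors of Intel's first microprocessor (the 4004, 1971),
# project a decade ahead under the two competing doubling periods.
moore = transistors(2_300, 10, 24)   # Moore's two-year doubling: 5 doublings -> 73,600
house = transistors(2_300, 10, 18)   # House's 18-month refinement grows faster still
print(moore, house)
```

The gap between the two projections widens every year, which is the essence of exponential growth: a small change in the doubling period compounds into an enormous difference over decades.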
Many technological advancements are now taken for granted. The computing speed of a smartphone has become commonplace, and it is easy to forget that its processing power far surpasses that of the guidance computer aboard Apollo 11, which took man to the moon in 1969.
Smarter and smarter tech
The social media revolution means anybody can connect with everybody in an instant. Drones now hover in the skies, and companies like Amazon are already planning to use them for deliveries. They are already being employed in film-making, news gathering, search and rescue, archaeological surveys, forensic photography, fire-fighting, and much more.
Robots in South Korea are serving meals at restaurants. The first robot-run hotel is due to open in Japan this coming July. Some robots now run faster than Usain Bolt. Others reportedly perform surgery more quickly and safely than doctors. Robots are being used in law firms to sort documents. Smart watches and other wearables are doing the rounds as 24/7 GPs, monitoring your blood pressure, blood sugar and a host of other health indicators, and acting as an early warning system for your health.
There is cloning, and a science that postulates the possibility of immortality. Kindergartens now employ teacher robots and caregivers that respond "emotionally" to the kids. Self-driving cars are upon us, sleek and more efficient. Eventually, transactions may be done virtually, and the car will deliver itself. One wonders what developments there will be in the area of basic instincts like sex. The nightmare scenario, however, is when robots start invading countries to take over governments. The possibilities are endless.
The implications for jobs and professions have filled numerous articles, with fears of redundancies, particularly in professions such as medicine, architecture and engineering. And who knows, the age of digital justice might not be far away.
But there has been a fair share of myths. One of them is the notion that an artificial intelligence (or "artilect") is a computer that literally has a mind of its own: aware of you, and able to talk to people about its preferences; thinking just like a human, setting its own goals and executing them; a person, except that the person lives inside a computer; able to do whatever a person can do, including lying, persuading, strategising, philosophising and antagonising.
There are indeed so many assumptions about artificial intelligence, which are not necessarily true.
These assumptions are largely born of silly old-school science fiction movies and cartoons. They have deceived people into not taking the subject of artificial intelligence seriously enough, and dismissing it as science fiction.
A computer that can think and “feel” may sound like a purely silly concept confined to the silver screen, meant to entertain children, scare the faint-hearted, or merely to amuse. It possibly conjures up images of terminators or transformers. Perhaps you have seen one too many shows about computers that get struck by lightning or a strawberry slushy that suddenly comes alive.
The age-old cliché begins, where the machine asks questions about love or souls or feelings. Perhaps it doesn’t want to die so it attempts to kill everyone, or maybe it falls in love with the protagonist and locks him in the control room forever, forgetting that humans can starve to death.
At this point most of us are putting artificial intelligence on the same shelf as UFOs, flying cars and lizard people. But flying cars may become reality sooner rather than later. This is where fiction may converge with fact.
Drones for pets
US Army soldiers have been found to take an unexpected interest in their drones. Some have started to treat robots as pets, friends, or even as extensions of themselves. Many have named their robots after a celebrity, or after a wife or girlfriend. There are reports of soldiers feeling strong anger and sadness when their field robot is destroyed or disabled in battle. Some have even held funerals for their robots.
This is largely an example of anthropomorphism: projecting human qualities onto inanimate objects. Anthropomorphising helps people make sense of unusual and complex entities. If such entities share no obvious characteristics with them, people will find a way to make them seem more human by projecting humanity onto their actions.
It is no wonder then that science fiction authors and audiences alike tend to envision robots as individuals with feelings, with emotions and desires similar to what humans have.
It doesn’t help that artificial intelligence is usually portrayed in a humanoid robot form. Even today, robots like the famous Japanese model, Asimo, are being used as PR tools for a future robotics industry.
Kurzweil predicts the singularity will arrive in 2045. Some say it will be sooner, some later, when out-of-control machines with far superior intelligence begin to rule, omnipotent. If a nightmare scenario begins, who knows whether the African continent will not be the theatre for its premiere. After all, Africa is where it all began, and it may be where it all ends.
Nii Commey is a third-year Cognitive Science student at the University of KwaZulu-Natal, South Africa, and a member of the Singularity Institute Africa.