Kumud M. Srinivasan shares her admiration for technology and wonders why there is growing apprehension surrounding technological advancements.
I am a tech-world insider, and when I say inside, I mean three-plus decades working ten-hour days side-by-side with geeky engineers and straight-talking managers and hard-charging executives on faster, cheaper, tinier processors—the brains of computers—to transmit, store, and analyze ever-increasing amounts of data. Together, we have created an industry that churns out products the world hasn’t known it needs but can no longer live without; an industry that has made work a mystical experience; an industry that has turned our lexicon upside down, forever changing the meaning of tweets, bugs, viruses, cookies, and spam; an industry that has made it cool to be nerdy and that has made it cool to be me, too.
My lens on the world of technology has made me an admirer of the human capacity for problem solving and engineering. Not once, in all my years working with these brilliant, passionate people, have I doubted our intent to do the right thing.
So, it perplexes me when excitement over technology becomes passé, and cynicism and paranoia become the overriding sentiments. The turning point that stands out for me is the November 2016 New Yorker article “Silicon Valley Has an Empathy Vacuum” by Om Malik (founder of, and former senior writer for, GigaOm, a technology blog and media company), in which he accuses Silicon Valley of a “distinct lack of empathy for those whose lives are disturbed by its technological wizardry.” In February 2018, a former Google design ethicist co-founded the Center for Humane Technology to shed light on “how technology platforms were hijacking our minds and society.” More recently, on August 21, 2019, The Wall Street Journal carried an article by Ellen Gamerman on how fictionalized versions of big tech firms are being cast as the bad guys in a slew of novels and movies. And on August 28, 2019, Josh Hawley asserted in The Wall Street Journal that Silicon Valley’s giants are no longer innovating; instead, they are indulging in “ever more sophisticated exploitation of people.”
“Lack of empathy,” “hijacking minds and society,” “the bad guy,” “exploiting people” … how did this happen?
Our apprehension over our gadgets’ addictive ways is compounded by the ever-growing, largely monopolistic presence of the companies behind them. In QualityLand, a satirical German novel that debuts as an HBO series in January, we meet TheShop, an online retailer with an all-powerful algorithm that enables it to know what everyone wants before they themselves know it. Those who want to rebel against TheShop’s algorithms and its world, as the protagonist does, find themselves isolated and persecuted.
The corrosive effects of fake news and hate sites on social media have been devastatingly real, as we have seen in elections and hate crimes around the world. The internet, the phenomenon that almost single-handedly made the world flat—one giant, open, networked community—has splintered us into divided and divisive camps, each a giant echo chamber. Even the holy grail of making computer apps as human-like as possible is losing its halo, with these apps stoking existential fears. On November 10, 2017, The Wall Street Journal published an article titled, tongue-in-cheek, “How to Survive a Robot Apocalypse: Just Close the Door.” Almost daily, one reads articles in the media with inflammatory titles about how robots are coming for us and our jobs.
Why is tech getting such a bad rap? Isn’t technology good for us?
Did you know that if you visit the Tipsy Robot bar on the Las Vegas Strip, you will be greeted by a couple of bionic mixologists? They won’t commiserate with you, listen to your problems, or adapt their style to match your mood. But they’re fast, they don’t get tired or grumpy—and they also dance. “Robo” shots and a “Bionic Bomb” are among the many concoctions they mix.
As reported in the March 17, 2016, article “Europe Bets on Robots to Help Care for Seniors” written by Nick Leiber and published in Bloomberg Businessweek: “Retiree Maurizio Feraboli taps a grocery list into a tablet and sends wheeled robots to retrieve food from a store near his apartment outside Pisa, Italy. His neighbor Wanda Mascitelli directs robots to grab the trash from her kitchen and drop it into a dumpster on her street. A robot also warns Mascitelli about a possible gas leak and later brings her a glass of water and a bottle of vitamins.”
Sawyer and Baxter, two surprisingly sophisticated “cobots”—collaborative robots—work on the factory floor of Tuthill Plastics Group, an injection-molding company in Clearwater, Florida. Sawyer, a one-armed cobot, and Baxter, his two-armed colleague, are equipped with cameras, touch sensors, and screens that display human facial elements. Their “faces” not only endear them to their human counterparts but also help the cobots signal their intent through human-like expressions. Today, they perform with dexterity the repetitive tasks asked of them. Soon, they will anticipate and correct for their colleagues’ errors by “reading” their brain signals.
Such fun and useful little gadgets! So what if these smart machines are supplanting us, especially in low-skill jobs made up of routine tasks?
Let’s be honest: robotic technology is far from a bad thing. Robots, by enabling us to focus on high-value tasks, allow us to think of work in new and exciting ways. In fact, their evolution might be just what we need.
In my own team, data scientists—people who write AI code that automatically detects sophisticated patterns in data and takes appropriate action—are highly sought after as chip designs become more exacting and design engineers start relying on AI to correct for their omissions and to augment their expertise. Universities, in turn, are scrambling to create data science departments and programs. This is how the economy moves forward.
I admit that such changes, on a broader scale, can be disruptive, especially if we believe the claims of some experts that a mere 20% of today’s workforce has the skills needed for 60% of the jobs that will come online in the next five to ten years. Re-skilling and mid-career transitions will need to become a way of life. Employers will need to be proactive and generous in the retraining of their employees. Revolutionary ideas like a universal basic income, whether to facilitate job transitions or to enable people to redefine themselves, will need to take off. And we will need to become comfortable with identities that aren’t wrapped around our professions.
That is a tall order. But if we have the wherewithal, we will do it. A more daunting challenge, in my mind, is our guardianship of this technology, given our dubious track record. In the invention of weapons of war, we have often extended ourselves to the point where we cannot control our own inventions. The use of the internet for fake news is not something we envisioned but now seem unable to control. Facial recognition technology can ease the inconveniences around security checks but can be deadly in the wrong hands.
Autonomous machines with no human operator “in the loop” could be hazardous. The robot in the movie Robot & Frank could graduate from a mere assistant to a thief making off with the jewels. Yes, we could “teach” robots to make decisions, give them algorithms that would differentiate right from wrong. But we would need to agree on moral standards that fit our social norms: they must not harm us; they must obey us, unless obeying would harm us; and they must protect themselves, so long as doing so neither harms nor disobeys us.
A few prominent leaders—Elon Musk, the CEO of Tesla, and Stephen Hawking, the late English physicist, for example—have raised the clarion call. Employees, too, are taking on the cause. Google reportedly declined to renew its contract with the Pentagon to develop artificial intelligence for drone video analysis until ethical guidelines are clearly defined. The decision followed objections by thousands of staff worried that Google’s technology could be used to kill people.
But the tech industry delivers cool products. It doesn’t own how people choose to use these products, does it?
I recall a conversation with my team. We were frustrated by routine user errors on one of our products. “They just don’t know how to use it,” we lamented. But is that not our problem, we eventually asked ourselves, before proceeding to redesign the app to make it more intuitive: adding prompts and drop-down menus, reducing the number of clicks, and building personalized intelligence into our error messages.
The tech industry must care about more than just its wizardry. The buzzing of push notifications, the nagging red bubbles on apps, and endless feeds keep us constantly engaged with our devices and make them genuinely hard to put down. But are they making our lives better? If the products we deliver turn children into addicts, we must do something. If the products we put out there fan hate and facilitate hate crimes, we must censor ourselves. If designing our products calls for new skills on a massive scale, we have to step up retraining, so that large swaths of workers don’t get left behind. Until we are ready with moral standards for drones and droids, we need to put on the brakes, and alter laws, to ensure that these machines are not prematurely placed in dangerous situations.
Throughout history, technology has been an unstoppable force.
In 1589, Queen Elizabeth I refused to grant a patent to the inventor of a mechanical knitting machine, for fear of putting manual knitters out of work. In the early 19th century, bands of angry Luddites smashed up the steam-powered looms that were throwing hand-weavers out of work. In the late 19th century, social critics blamed the bicycle for disrupting well-established norms—aiding female liberation and eroding familiar class divisions—by making us more mobile. We have, it seems, harbored doubts about technology all along. And yet, throughout civilization, we have pursued its ceaseless evolution, even as our paranoia has played out.
And why not? It is the agent that has single-handedly revolutionized life and multiplied conveniences and luxuries a hundredfold. I think about that when I stride towards my gate at the airport and my Fitbit buzzes, an instant message informing me of a late gate change, and without missing a beat, I alter my direction and keep going. Technology! To me, it is still magical. I hop onto the treadmill in the morning and tune into a podcast, a deep dive into some esoteric topic, and emerge thirty minutes later energized physically and mentally from that no-cost, fun engagement. Today, our personal life is so pervasively enriched by the technology we have developed that we tend to take it for granted. I see technology’s hand in the ease with which online communities cross regional and cultural barriers—yes, that same internet that is also dividing us—and in the nonjudgmental ways millennials at my workplace relate to each other across continents and cultures.
Already, we are on to the next big advance. Brain-computer interfaces will soon combine bioengineering and machine learning to control artificial limbs. Melissa Loomis, a woman from Canton, Ohio, who had her arm amputated in 2015 after being attacked by a raccoon, can sense touch through her prosthetic limb, a first for a patient in the United States, thanks to the use of a brain-computer interface. At some point, these technologies will become good enough to offer the prospect of augmenting our finite, human abilities. Bionic Boots can strap onto a person’s legs and boost their normal running ability to Olympian levels. Ian McEwan’s Machines Like Me, in which the author imagines a society with robots living among us as helpers, friends, even lovers, already feels relatable and realistic.
Five years ago, during an expat assignment in India, I saw firsthand how this industry was transforming that country. Raj, my “house-boy,” had little interest in his job. Tasks would go undone—getting a leaky faucet fixed, taking care of my laundry. When I would ask why, “I forgot” is what he would say. In the morning, I would see clutter in the kitchen from the night before; his comeback, when I would point it out, would be a petulant, “But I get to it in the morning.” One evening, when I checked my phone before confirming I wouldn’t be home for dinner the following day, he asked, “How did your phone tell you that?” His eyes lit up when I showed him. The next day, he came over when I was reading the newspaper and asked shyly, “Do you have a Facebook account? Will you help me set it up?”
India had leap-frogged into the mobile era, with mobile phones as pervasive as televisions, even in slums. My in-laws communicated their needs to their “sabji-wala”—fresh vegetable hawker—on his cell phone. My sister-in-law willingly gave out her mobile number to stores to get sales coupons directly on her phone. Chennai residents used Facebook to help each other out during the devastating hundred-year floods of 2015. Affordable data services made government services accessible to the masses. From the financial inclusion of rural women, to health care for the rural poor, to transforming India’s beleaguered urban centers into smart cities, digital technology was becoming the one-stop solution to that country’s labyrinthine challenges.
I realized, during a panel discussion in Delhi on technology and its impact, that India had turned a corner. The discussion kicked off on a combative note with an out-of-turn question yelled out by someone in the audience, “Isn’t technology destroying our culture and our civilization? How will it end?” The question was deftly countered by a fellow panel member with a defiant, “My granddaughter lives thousands of miles away, but I still get to talk to her every day, thanks to technology.” The audience applauded. India may have been late to the party, but romantic notions of the pre-technology era were rapidly becoming passé.
But technology is indeed a double-edged sword. Our doubts and fears are real, as are our tremendous gains. For technology to continue to be the fruit of our natural compulsion to move forward to better things and to higher reaches, we need to extend our creativity and love of problem solving beyond product engineering. We need to address such socio-economic challenges as job protection, income inequality, tax policies, universal basic income, product impact, and corporate governance and oversight. That is how technology will truly make our lives better, so that 100 or 200 or 300 years from now we witness not the demise of humans but the rise of the Homo Deus—“Human God”—that author Yuval Noah Harari envisions.