Diving Deep Into Artificial Intelligence With Katja Grace

As the world continues to evolve, artificial intelligence becomes ever more prevalent. We sat down with Katja Grace to discuss the influence it is having on us.

Katja Grace runs AI Impacts, a small multidisciplinary research group concerned with long-run risks from advanced artificial intelligence.

The mission of the group is to answer decision-relevant questions about the future of artificial intelligence.

Their 2016 survey of machine learning researchers was the 16th most discussed academic paper in 2017 according to Altmetric. 

Is artificial intelligence a boon or a curse? In part one of this interview with the eminent researcher Katja Grace, we discuss the motivations that attracted her to the field of AI, the opportunities for today’s youth in this specialised area, where we come across AI on a day-to-day basis, whether AI will surpass human intelligence, and the dangers AI poses to humanity.

 

AI Motivation

Ms. Grace, first of all, thank you for the opportunity of this interview. Let me begin by asking: what were the motivations that drew you to the field of artificial intelligence?

My pleasure!

As a young teenager, I heard that there were people like me in terrible situations overseas, and that I could help them with money. I resolved to save my money to help them, confused about why others spent precious, life-saving dollars on magazines and movie tickets.

But setting out to put my money where it could make the most difference, a more general question presented itself: were there projects outside of global poverty where my resources could do even more good?

On the lookout for such callings, I came across the community concerned with advanced artificial intelligence posing a risk to humanity’s long term existence.

I came to agree with them that mitigating risks of human extinction (or ‘existential risks’) should top my list of priorities.

The future is so long: a lot of generations might come after us, and if each generation hands on a better world, our descendants could live incredible and meaningful lives. If we dropped the ball and ended it, it would be an unimaginable loss.

Yet I think we do risk this: in the last century, our technological mastery has reached genuinely dangerous levels, where destroying humanity isn’t some thought experiment.

Advanced AI is often considered the riskiest upcoming technology by people thinking about existential risks, so I wanted to understand better whether there is a real chance of AI destroying humanity, and if so, exactly how this might happen, so we can better figure out how to avoid that.

 

But, What Is Artificial Intelligence?

In layperson’s terms, can you describe AI?

‘Artificial Intelligence’ is machinery that manipulates information in useful ways, especially the kinds of ways we typically think of as the domain of the human brain.

For instance, taking in numbers from a light sensor and putting out the label ‘dolphin’, or being given a chess board and producing a good move to make.
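To make that ‘numbers in, label out’ picture concrete, here is a deliberately tiny, hypothetical Python sketch (not how any real vision system works): it takes a short list of sensor readings and returns the label of whichever hand-picked reference pattern they most resemble. Modern systems learn such patterns from data rather than having them typed in.

```python
# A toy illustration of "numbers in, label out" (not a real vision system):
# compare incoming sensor readings to a few hand-picked reference patterns
# and return the label of the closest one.

REFERENCES = {
    "dolphin": [0.2, 0.7, 0.9, 0.4],      # made-up example readings
    "chessboard": [0.9, 0.1, 0.9, 0.1],
}

def classify(readings: list[float]) -> str:
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCES, key=lambda label: distance(readings, REFERENCES[label]))

print(classify([0.3, 0.6, 0.8, 0.5]))  # -> "dolphin"
```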

 

What are the opportunities for the youth of today with reference to AI?

AI is going to change the world a lot, and there are ways this transition could go better or worse, across many domains: it’s an open question whether many people will end up unemployed without support, or powerless in the face of intense and mathematically perfected political forces for outcomes with no real value, or even dead.

There are lots of opportunities for understanding the situation better (e.g. the kind of thing I work on), and working to make it more likely to go well.

For instance, I know a lot of people working on ‘technical AI safety’—trying to find ways to design AI systems themselves that reduce the risks.

And I know other people working in policy and governance, on international relations, thinking about institution design, and doing research supporting these activities. It’s worth noting that a background in artificial intelligence is not required at all for this. (You can apply to work at my org, AI Impacts, here.) 

 

Changing Lives

What are some day-to-day examples of AI touching the lives of the average individual?

One area I’ve really noticed AI in my life lately is personalized recommendations and ads—I don’t know how the Facebook algorithms work, but some of their ads are surprisingly on point. 

When self-driving cars become widespread, that’s going to be huge.

How do you see AI progressing in the foreseeable future?

I think within our lifetimes AI could become better than humans at all mental tasks.

 

Breaking The Code

Does AI require coding?

If you are building AI, at some point some coding has to happen. But the main way that AI is made these days is for the AI algorithm itself not to be hand-coded. Instead, people write the program that builds the AI. That program works by starting out with a random program, in the form of a ‘neural network’ – many nodes doing simple calculations, connected together – and then making a massive number of tiny adjustments, each one moving that random program a tiny bit in the direction of being better at the task, until it is potentially really good.

This means AI researchers are still dealing with code, but it isn’t the code of the AI itself: it is the code of the AI-builder.
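As an illustration only (and not a description of how any particular AI lab works), here is a minimal Python sketch of that idea: start from randomly chosen parameters and repeat a huge number of tiny adjustments, each nudging the ‘program’ toward doing slightly better at a toy task. A real neural network has millions or billions of parameters, but the shape of the loop is the same.

```python
# A toy version of "the program that builds the AI" (illustration only).
# The "random program" here is just two numbers, w and b; the task is to
# fit y = 2x + 1. Each step nudges w and b a tiny bit toward lower error.
import random

w, b = random.random(), random.random()        # start with a random "program"
data = [(x, 2 * x + 1) for x in range(-5, 6)]  # toy training examples
learning_rate = 0.01

for step in range(10_000):                     # a massive number of tiny iterations
    x, y = random.choice(data)
    prediction = w * x + b                     # what the current program outputs
    error = prediction - y
    # Move each parameter slightly in the direction that reduces the error.
    w -= learning_rate * error * x
    b -= learning_rate * error

print("learned parameters:", round(w, 2), round(b, 2))  # should approach 2 and 1
```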

For the kind of work that I do, studying how AI will affect society, no coding is required.

 

DIY

In the future, will AI do away with the need for engineers and do the coding by itself?

Most likely. I think at some point, AI systems will be better at coding than any human engineer, since it’s hard to imagine that the human brain is the best physical system for coding.

And if eventually we get there, it is hard to imagine that humans continue to do this work, if software that does it better is available (and probably cheaper).

What are the dangers of AI outperforming human intelligence in the future?

A big concern is that AI systems will tend to have objectives that they were given in the process of us making them helpful, and that once they become better than humans at everything, they will be in a position to adeptly carry out those objectives beyond our desire for them, and in conflict with our broader values. 

This is a more intense version of a common concern with corporations. A corporation is a structure created by humans to be helpful, but often set up with narrow goals that don’t represent everything we care about—for instance, a corporation might be constructed to prioritise making profits, perhaps via making paper bags.

AI Expert: Katja Grace

If the corporation makes some paper bags, everyone might be happy, but problems can arise in the long run if the corporation strives to make paper bag profits even when this destroys other things that we care about.

For instance, it might find sources of cheap paper even though they are known to be destroying valuable forests, market in ways that encourage fear of insufficient packaging, or lobby to prevent laws that would close loopholes it is exploiting.

And having created this corporation, there is no way for bystanders to ‘turn it off’, unless it runs badly afoul of the law—but all the while, if it is well designed to maximize profit, then it is intelligently judging which illegalities are safe, and creatively avoiding trouble while exploiting anything it can, and building its own social power, and renting lawyers.

This is the kind of problem we have seen with corporations that are made of humans, but it could be much worse if the narrowly profit-oriented entity doesn’t even have humans inside it to say, ‘oh, hmm, I don’t like where this is going, and I’m not going to do this thing that would make us money, because that is messed up’.

With advanced AI, the entire entity could be automated, so that there wasn’t a human in there to become disillusioned – only algorithms coldly and relentlessly assessing what can be most profitably got away with.

 

Human Vs AI?

Ultimately, I don’t think it is hard to imagine the entire human race being disempowered. Many humans now struggle for their survival in economically marginal circumstances, and enjoy little respect for their wishes, and little control over the future.

I think it’s possible that the whole human race could end up in that situation, with the loci of power in the world fully mechanized. The automated processes need not be conscious or human-like, and their power grab need not be personal or conceptualised in that way, to be incredibly real-world effective.

We are used to automated systems being clumsy, blind and foolish, but the point is that mastery over the rich detail of the physical world is a computing task like any other, one that will become automated in time, and then computers will seem no more clumsy at manipulating the real world than chess AI currently seems clumsy at chess. That is, they will seem inhumanly deft.

If we ended up in a circumstance like that, with the world’s activity mostly being coordinated by systems with no regard for humanity, my guess is that eventually humanity ends.

A different way of thinking of all this is to say that if machines become more competent decision-makers than humans, there will be strong incentives to give them control over all sorts of decisions.

And if their decision-making doesn’t reflect what we actually care about long term, then the future may not either. And we may have few options for regaining control, being powerfully resisted by the hungry systems we have created.


Photos: Ms Katja Grace and Shutterstock


Part Two of our chat with Katja will drop next week. In the meantime, why not check this article out?

How Will Medical Technology Evolve in the 2020s?
