One day last summer, Microsoft’s director of artificial intelligence research, Eric Horvitz, activated the Autopilot function of his Tesla sedan. The car steered itself down a curving road near Microsoft’s campus in Redmond, Washington, freeing his mind to better focus on a call with a nonprofit he had cofounded to address the ethics and governance of AI. Then, he says, Tesla’s algorithms let him down.
“The car didn’t center itself exactly right,” Horvitz recalls. Both tires on the driver’s side of the vehicle nicked a raised yellow curb marking the center line, and shredded. Horvitz had to grab the wheel to pull his crippled car back into the lane. He was unharmed, but the vehicle left the scene on the back of a truck, with its rear suspension damaged. Its driver left affirmed in his belief that companies deploying AI must consider new ethical and safety challenges.
At Microsoft, Horvitz helped establish an internal ethics board in 2016 to help the company navigate potentially tricky spots with its own AI technology. The group is cosponsored by Microsoft’s president and most senior lawyer, Brad Smith. It has prompted the company to refuse business from corporate customers, and to attach conditions to some deals limiting the use of its technology.
Horvitz declined to provide details of those incidents, saying only that they typically involved companies asking Microsoft to build custom AI projects. The group has also trained Microsoft sales teams on applications of AI the company is wary of. And it helped Microsoft improve a cloud service for analyzing faces that a research paper revealed was much less accurate for black women than white men. “It’s been heartening to see the engagement by the company and how seriously the questions are being taken,” Horvitz says. He likens what’s happening at Microsoft to an earlier awakening about computer security—saying it too will change how every engineer works on technology.
Many people are now talking about the ethical challenges raised by AI, as the technology extends into more corners of life. French President Emmanuel Macron recently told WIRED that his national plan to boost AI development would consider setting “ethical and philosophical boundaries.” New research institutes, industry groups, and philanthropic programs have sprung up.
Microsoft is among a small number of companies building formal ethics processes. Even some companies racing to reap profits from AI have become worried about moving too quickly. “For the past few years I’ve been obsessed with making sure that everyone can use it a thousand times faster,” says Joaquin Candela, Facebook’s director of applied machine learning. But as more teams inside Facebook use the tools, “I started to become very conscious about our potential blind spots.”
At Facebook’s annual developer conference this month, data scientist Isabel Kloumann described a kind of automatic adviser for the company’s engineers called Fairness Flow. It measures how machine-learning software analyzing data performs on different categories—say men and women, or people in different countries—to help expose potential biases. Research has shown that machine-learning models can pick up or even amplify biases against certain groups, such as women or Mexicans, when trained on images or text collected online.
Kloumann’s first users were engineers creating a Facebook feature where businesses post recruitment ads. Fairness Flow’s feedback helped them choose job recommendation algorithms that worked better for different kinds of people, she says. She is now working on building Fairness Flow and similar tools into the machine-learning platform used company-wide. Some data scientists perform similar checks manually; making it easier should make the practice more widespread. “Let’s make sure before launching these algorithms that they don’t have a disparate impact on people,” Kloumann says. A Facebook spokesperson said the company has no plans for ethics boards or guidelines on AI ethics.
Google, another leader in AI research and deployment, has recently become a case study in what can happen when a company doesn’t seem to adequately consider the ethics of AI.
Last week, the company promised that it would require a new, hyperrealistic form of its voice assistant to identify itself as a bot when speaking with humans on the phone. The pledge came two days after CEO Sundar Pichai played impressive—and to some troubling—audio clips in which the experimental software made restaurant reservations with unsuspecting staff.
Google has had previous problems with ethically questionable algorithms. The company’s photo-organizing service is programmed not to tag photos with “monkey” or “chimp” after a 2015 incident in which images of black people were tagged with “gorilla.” Pichai is also fighting internal and external critics of a Pentagon AI contract, in which Google is helping create machine-learning software that can make sense of drone surveillance video. Thousands of employees have signed a letter protesting the project; top AI researchers at the company have tweeted their displeasure; and Gizmodo reported Monday that some employees have resigned.
A Google spokesperson said the company welcomed feedback on the automated-call software—known as Duplex—as it is refined into a product, and that Google is engaging in a broad internal discussion about military uses of machine learning. The company has had researchers working on ethics and fairness in AI for some time but did not previously have formal rules for appropriate uses of AI. That’s starting to change. In response to scrutiny of its Pentagon project, Google is working on a set of principles that will guide use of its technology.
Some observers are skeptical that corporate efforts to imbue ethics into AI will make a difference. Last month, Axon, manufacturer of the Taser, announced an ethics board of external experts to review ideas such as using AI in policing products like body cameras. The board will meet quarterly, publish one or more reports a year, and includes a member designated as a point of contact for Axon employees concerned about specific work.
Soon after, more than 40 academic, civil rights, and community groups criticized the effort in an open letter. Their accusations included that Axon had omitted representatives from the heavily policed communities most likely to suffer the downsides of new police technology. Axon says it is now looking at having the board take input from a wider range of people. Board member Tracy Kosa, who works on security at Google and is an adjunct professor at Stanford, doesn’t see the episode as a setback. “I’m frankly thrilled about it,” she says, speaking independently of her role at Google. More people engaging critically with the ethical dimensions of AI is what will help companies get it right, Kosa says.
None have got it right so far, says Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics. “There aren’t any good examples yet,” he says when asked about the early corporate experiments with AI ethics boards and other processes. “There’s a lot of high-falutin talk but everything I’ve seen so far is naive in execution.”
Wallach says that purely internal processes, like Microsoft’s, are hard to trust, particularly when they are opaque to outsiders and don’t have an independent channel to a company’s board of directors. He urges companies to hire AI ethics officers and establish review boards but argues external governance such as national and international regulations, agreements, or standards will also be needed.
Horvitz came to a similar conclusion after his driving mishap. He wanted to report the details of the incident to help Tesla’s engineers. When recounting his call to Tesla, he describes the operator as more interested in establishing the limits of the automaker’s liability. Because Horvitz wasn’t using Autopilot as recommended—he was driving slower than 45 miles per hour—the incident was on him.
“I get that,” says Horvitz, who still loves his Tesla and its Autopilot feature. But he also thought his accident illustrated how companies pushing people to rely on AI might offer, or be required, to do more. “If I had a nasty rash or problems breathing after taking medication, there’d be a report to the FDA,” says Horvitz, an MD as well as computer science PhD. “I felt that that kind of thing should or could have been in place.” NHTSA requires automakers to report some defects in vehicles and parts; Horvitz imagines a formal reporting system fed directly with data from autonomous vehicles. A Tesla spokesperson said the company collects and analyzes safety and crash data from its vehicles, and that owners can use voice commands to provide additional feedback.
Liesl Yearsley, who sold a chatbot startup to IBM in 2014, says the embryonic corporate AI ethics movement needs to mature fast. She recalls being alarmed to see how her bots could delight customers such as banks and media companies by manipulating young people to take on more debt, or spend hours chatting to a piece of software.
The experience convinced Yearsley to make her new AI assistant startup, Akin, a public benefit corporation. AI will improve life for many people, she says. But companies seeking to profit by employing smart software will inevitably be pushed towards risky ground—by a force she says is only getting stronger. “It’s going to get worse as the technology gets better,” Yearsley says.
Originally published as “Tech Firms Move to Put Ethical Guard Rails Around AI” (2,599 words) on www.wired.com, May 16, 2018.