The National Security Commission on Artificial Intelligence said attacks on commercial AI systems were already happening and that, "with rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development."

Serious hacks, regularly reported just a few years ago, are now barely disclosed. Too much is at stake and, in the absence of regulation, "people can sweep things under the rug at the moment and they're doing so," said Bonner.

Attacks trick the artificial intelligence logic in ways that may not even be clear to their creators. And chatbots are especially vulnerable because we interact with them directly in plain language. That interaction can alter them in unexpected ways.

Researchers have found that "poisoning" a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc - and be easily overlooked.

A study co-authored by Florian Tramér of the Swiss university ETH Zurich determined that corrupting just 0.01% of a model was enough to spoil it - and cost as little as $60. The researchers waited for a handful of websites used in web crawls for two models to expire. Then they bought the domains and posted bad data on them.

Hyrum Anderson and Ram Shankar Siva Kumar, who red-teamed AI while colleagues at Microsoft, call the state of AI security for text- and image-based models "pitiable" in their new book "Not with a Bug but with a Sticker." One example they cite in live presentations: The AI-powered digital assistant Alexa is hoodwinked into interpreting a Beethoven concerto clip as a command to order 100 frozen pizzas.

Surveying more than 80 organizations, the authors found the vast majority had no response plan for a data-poisoning attack or dataset theft. The bulk of the industry "would not even know it happened," they wrote.

Andrew W. Moore, a former Google executive and Carnegie Mellon dean, says he dealt with attacks on Google search software more than a decade ago. And between late 2017 and early 2018, spammers gamed Gmail's AI-powered detection service four times.

The big AI players say security and safety are top priorities and made voluntary commitments to the White House last month to submit their models - largely "black boxes" whose contents are closely held - to outside scrutiny. But there is worry the companies won't do enough.

Tramér expects search engines and social media platforms to be gamed for financial gain and disinformation by exploiting AI system weaknesses. A savvy job applicant might, for example, figure out how to convince a system they are the only correct candidate.