Three years ago, Dana DeBeauvoir, a county clerk in Austin, Texas, had a problem. Soon she’d have to replace the aging voting machines her county had bought eight years earlier. Congress had ponied up the money for those machines, driven by the hanging chad debacle in Florida’s 2000 election. But this time, the feds weren’t coughing up any cash.
Even if she had the money, though, she didn’t like her choices. Computer scientists had been sounding alarms about the rampant security flaws in voting machines for years, and the manufacturers hadn’t responded. So DeBeauvoir took a very unusual step: She gave the keynote speech at a computer voting security conference, challenging the assembled computer scientists to build her the voting system of her dreams.
She outlined four requirements. First, the system had to use inexpensive, off-the-shelf hardware. Second, voters had to know that their votes were counted accurately and that the election outcome was correct. Third, voter privacy had to be protected — in particular, vote-selling had to be impossible, allowing no way for a voter to show anyone else their vote. And finally,
From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what was it about some paintings that arrested people’s attention upon viewing them, that cemented them in the canon of art history as iconic works?
In many cases, it’s because the artist incorporated a technique, form or style that had never been used before. These artists exhibited a creative and innovative flair that would go on to be mimicked by others for years to come.
Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by artificial intelligence (AI)?
At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assesses the creativity of any given painting, while taking into account the painting’s context within the scope of art history.
In the end, we found that, when presented with a large collection of works, the algorithm can successfully highlight paintings that art historians consider masterpieces of the medium.
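The core idea can be illustrated with a toy sketch. This is not the laboratory’s actual algorithm, and the feature vectors and weighting are invented for illustration: it simply scores each painting by how little it resembles earlier works (originality) plus how much later works resemble it (influence).

```python
# Toy sketch of context-aware creativity scoring. Assumptions: each
# painting is reduced to a numeric feature vector (here hand-made),
# and creativity = originality + influence. Not the Rutgers algorithm.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def creativity_scores(paintings):
    """paintings: list of (year, feature_vector) tuples.

    A painting scores high if it is unlike what came before it
    (original) and like what came after it (influential)."""
    scores = []
    for year_i, feat_i in paintings:
        earlier = [cosine(feat_i, f) for y, f in paintings if y < year_i]
        later = [cosine(feat_i, f) for y, f in paintings if y > year_i]
        originality = 1.0 - (np.mean(earlier) if earlier else 0.0)
        influence = np.mean(later) if later else 0.0
        scores.append(originality + influence)
    return scores

# Three invented "paintings": the second breaks with the first and is
# then imitated by the third, so it should score highest.
paintings = [
    (1900, np.array([1.0, 0.0])),
    (1910, np.array([0.0, 1.0])),
    (1920, np.array([0.0, 1.0])),
]
scores = creativity_scores(paintings)
```

In this toy setup, the 1910 work ends up with the top score precisely because it departs from its predecessor yet is echoed by its successor, which is the intuition behind judging creativity in historical context.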
The results show that humans are no longer the only judges of creativity. Computers can perform the
There’s been a literal firestorm in recent years on the proper meaning of “literally” — including the uproar over its non-literal opposite meaning being added to respected dictionaries.
Language is funny that way. We say things that are utterly false, but we seem to understand what the other person means, regardless. Intrigued by this quirk in communication, researchers built the first computational model that can predict humans’ interpretations of hyperbolic statements. (Literally.)
Separating literal from figurative speech is actually quite complicated. A proper interpretation of a statement depends on shared knowledge between speaker and listener, the ease of communication and knowledge of a speaker’s intentions. It’s relatively easy for humans to do this in an instant, but computational models aren’t as adept at identifying non-literal speech.
Researchers from Stanford and MIT set out to create a program that could. They began by asking 340 individuals, recruited through Amazon’s Mechanical Turk, to judge whether each of a series of statements was literal or hyperbolic. The statements described the prices of an electric kettle, a watch and a laptop. For example, “The laptop cost ten thousand dollars.”
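One intuition behind such a model can be sketched in a few lines. This is an illustrative simplification, not the researchers’ model, and the price priors below are invented: a stated price reads as hyperbole when it is wildly improbable given what the item plausibly costs.

```python
# Illustrative sketch, not the Stanford/MIT model: flag a stated price
# as hyperbolic when it has near-zero probability under a prior over
# what the item actually costs. The kettle prior here is made up.
def is_hyperbolic(stated_price, price_prior, threshold=0.01):
    """price_prior: dict mapping plausible prices to probabilities.

    Prices the prior considers essentially impossible are taken as
    figurative exaggeration rather than literal report."""
    return price_prior.get(stated_price, 0.0) < threshold

# Invented prior over what an electric kettle might really cost.
kettle_prior = {50: 0.5, 100: 0.3, 200: 0.15, 10_000: 0.001}

is_hyperbolic(10_000, kettle_prior)  # -> True: nobody pays that
is_hyperbolic(50, kettle_prior)      # -> False: a plausible price
```

The real model is richer, weighing shared knowledge and the speaker’s likely intent, but the shared-expectations idea is the same: the listener’s prior over the world does the work of separating literal from figurative.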
The results seemed intuitive: A statement claiming the kettle cost $10,000 was viewed as hyperbolic, but a price tag of $50 was
Search engines have come to define how most of us interact with digital information. But, if you think about it, they’re still pretty limited. We can search for words and, in recent years, Google Images allows us to search by picture. Want to search, though, for the flavor of apple, or the notes of the song you can’t remember the name to? You’re still out of luck.
However, researchers are making headway on another novel kind of search — searching by 3-D object. And that’s only going to become more useful in a world with growing access to 3-D printers.
Printing the Future
From candy to jawbones to high heels, 3-D printing has finally found its way into the mainstream. With Maker Labs gracing public libraries, museums and schools, the everyday person has access to a technology that was once reserved for industrial use.
The new website 3Dshap.es aims to collect all these 3-D files on the web in one place, allowing users to search for specific shapes and file types. The website, which is still in beta, is the work of a team of innovators at the UK-based 3DIndustri.es, including CEO Seena Rejal, and Michael Groenendyk, business librarian and researcher at Concordia University.
The builders of mobile gadgets face a paradox. They want to make the most powerful device they can, squeezed into the smallest box possible. But for a device to be useful, human beings have to be able to interact with all its features. More and more functions mean more and more buttons—and humans have stubbornly remained the same size and shape. A button can be made only so small before it becomes impossible to press, putting a tough limit on miniaturization. Different devices confront this paradox in different ways: Cell phone keypad buttons routinely do double, triple, and even quadruple duty, while devices like tablet computers use touch screens and gesture recognition.
AT&T is developing another solution. It wants you to be able simply to talk to an electronic device and have it follow your instructions. While some cell phones already offer voice recognition for basic tasks, such as looking up phone numbers in a contact list, AT&T envisions devices that can handle much more complicated voice commands, such as “Tell me where I can find the nearest ATM” or “Order me a pepperoni pizza.”
For decades AT&T has been working on a voice recognition system that can handle just such requests.