Building a More Secure Electronic Voting System

Three years ago, Dana DeBeauvoir, a county clerk in Austin, Texas, had a problem. Soon she’d have to replace the aging voting machines her county had bought eight years earlier. Congress had ponied up the money for those machines, driven by the hanging chad debacle in Florida’s 2000 election. But this time, the feds weren’t coughing up any cash.

Even if she had the money, though, she didn’t like her choices. Computer scientists had been sounding alarms about the rampant security flaws in voting machines for years, and the manufacturers hadn’t responded. So DeBeauvoir took a very unusual step: She gave the keynote speech at a computer voting security conference, challenging the assembled computer scientists to build her the voting system of her dreams.

She outlined four requirements. First, the system had to use inexpensive, off-the-shelf hardware. Second, voters had to know that their votes were counted accurately and that the election outcome was correct. Third, voter privacy had to be protected; in particular, vote-selling had to be impossible, meaning a voter could have no way to prove to anyone else how they had voted. And finally,

The Graffiti Code Breaker

To mark their territory and warn off rivals, 21st-century gangsters still depend on the street language of graffiti. “Graffiti is a big part of how gangs tell their story and pick their turf,” says Steven Schafer, a detective in the criminal gang unit of the Indianapolis Metropolitan Police Department. A new software program called GARI (Gang Graffiti Automatic Recognition and Interpretation) is now helping Schafer and other investigators decipher the scrawlings, monitor gang activity, and fight crime.

GARI connects officers in the field with a searchable database of graffiti information and images snapped by cell phones and digital cameras. An officer can take a photo and submit it to an app, which tags it with location, date, and time. The software also scans the graffiti for distinguishing features, including color and shape. Officers can then enter queries into GARI to check for similar images logged within a certain area and derive local gang affiliation, territorial disputes, and even the identity of the members who left their mark.
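In outline, that kind of query needs only geotagged records and a per-photo feature vector. Below is a minimal sketch of the idea in Python; it is an illustration, not GARI’s actual design. The GraffitiRecord fields, the color-histogram feature and the cosine-similarity matching are all stand-in assumptions.

```python
# Minimal sketch of a GARI-style geotagged image-similarity query.
# GARI's real features and matching aren't public; a color histogram and
# cosine similarity stand in for illustration.
import math
from dataclasses import dataclass

@dataclass
class GraffitiRecord:
    photo_id: str
    lat: float            # tagged automatically when the officer submits
    lon: float
    feature: list[float]  # e.g., a normalized color histogram

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlam = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def similar_nearby(query: GraffitiRecord, db: list[GraffitiRecord], radius_km=5.0):
    """Return records within radius_km of the query, most similar first."""
    hits = [r for r in db
            if haversine_km(query.lat, query.lon, r.lat, r.lon) <= radius_km]
    return sorted(hits, key=lambda r: cosine(query.feature, r.feature), reverse=True)
```

Filtering by distance first confines the comparison to photos from plausibly related turf before ranking them by visual similarity.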

Because GARI is so new, Schafer and his team must manually tag many of the submitted photos to

Teach Creativity to a Computer

From Picasso’s “The Young Ladies of Avignon” to Munch’s “The Scream,” what is it about certain paintings that arrests viewers’ attention and cements them in the canon of art history as iconic works?

In many cases, it’s because the artist incorporated a technique, form or style that had never been used before, exhibiting a creative and innovative flair that other artists would go on to mimic for years to come.

Throughout human history, experts have often highlighted these artistic innovations, using them to judge a painting’s relative worth. But can a painting’s level of creativity be quantified by artificial intelligence (AI)?

At Rutgers’ Art and Artificial Intelligence Laboratory, my colleagues and I proposed a novel algorithm that assesses the creativity of any given painting while taking into account the painting’s context within the scope of art history.
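To give a flavor of how such a score might work, here is a toy sketch in Python. It captures only the intuition that a creative painting departs from earlier work and anticipates later work; the sim function, the year-based split and the additive score are invented for illustration, not the lab’s published algorithm.

```python
# Toy creativity score: a painting rates highly if it looks unlike what came
# before it (originality) and like what came after it (influence).
# Illustration only; real similarity scores would come from learned visual
# features, and the published algorithm is more sophisticated.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Painting:
    title: str
    year: int

def creativity(p: Painting, corpus: list[Painting],
               sim: Callable[[Painting, Painting], float]) -> float:
    """sim(a, b) returns visual similarity in [0, 1] (assumed given)."""
    earlier = [q for q in corpus if q.year < p.year]
    later = [q for q in corpus if q.year > p.year]
    originality = 1.0 if not earlier else \
        1 - sum(sim(p, q) for q in earlier) / len(earlier)
    influence = 0.0 if not later else \
        sum(sim(p, q) for q in later) / len(later)
    return originality + influence  # high: broke with the past, shaped the future
```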

In the end, we found that, when given a large collection of works, the algorithm successfully highlights paintings that art historians consider masterpieces of the medium.

The results show that humans are no longer the only judges of creativity. Computers can perform the same task.

Diving Into the Data, Literally

One winter evening in the early 1860s, German chemist August Kekulé dozed off while sitting before a fire, falling into a remarkably vivid dream. Atoms formed themselves into undulating strings that morphed into a snake eating its own tail. Kekulé contended that this intense imagery helped him solve the mystery of benzene’s ringlike structure, a discovery that is considered a foundation of modern chemistry.

Nearly 100 years later, research teams on both sides of the Atlantic were vying to be the first to decipher the structure of DNA, the genetic material that is the basic molecule of life. In the United States, Nobel laureate Linus Pauling found himself up against obscure English physicist Francis Crick and his 20-something American collaborator, James Watson, in Cambridge. The upstart British team had a hidden advantage: X-ray crystallography images of DNA taken by colleague Rosalind Franklin. These images revealed that DNA was composed of two complementary strands of nucleic acids linked by chemical bonds, like the rungs of a ladder. The ability to visualize DNA gave them insight into the spiral double-helix structure, and they won the race.

In 1993, Kary Mullis won the Nobel for his invention of the polymerase chain reaction.

This Computer Knows When “Literally” Is Not Literal

There’s been a literal firestorm in recent years on the proper meaning of “literally” — including the uproar over its non-literal opposite meaning being added to respected dictionaries.

Language is funny that way. We say things that are utterly false, but we seem to understand what the other person means, regardless. Intrigued by this quirk in communication, researchers built the first computational model that can predict humans’ interpretations of hyperbolic statements. (Literally.)

Modeling Exaggeration

Separating literal from figurative speech is actually quite complicated. A proper interpretation of a statement depends on shared knowledge between speaker and listener, the ease of communication and knowledge of the speaker’s intentions. Humans do this in an instant, but computational models aren’t as adept at identifying non-literal speech.
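To see how prior knowledge can drive the distinction, here is a deliberately tiny sketch in Python: a listener who knows roughly what kettles cost can flag a stated price as hyperbole when its literal reading is too improbable. The price prior, the threshold and the interpret function are invented for illustration; the researchers’ actual model is probabilistic and far richer.

```python
# Minimal sketch: a listener with prior knowledge of what kettles cost can
# flag an utterance as hyperbole when its literal reading is too improbable.
# The prior and threshold are invented; the published model also infers
# what the speaker intends to convey.

KETTLE_PRICE_PRIOR = {20: 0.35, 50: 0.40, 100: 0.20, 200: 0.05}  # hypothetical P(true price)

def interpret(stated_price: int, prior: dict[int, float],
              threshold: float = 0.05) -> str:
    """Call a stated price literal only if it is plausible a priori."""
    return "literal" if prior.get(stated_price, 0.0) >= threshold else "hyperbolic"

print(interpret(10_000, KETTLE_PRICE_PRIOR))  # hyperbolic: kettles rarely cost $10,000
print(interpret(50, KETTLE_PRICE_PRIOR))      # literal: a $50 kettle is plausible
```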

Researchers from Stanford and MIT set out to create a program that could. They began by asking 340 individuals, recruited through Amazon’s Mechanical Turk, to judge whether each statement in a series was literal or hyperbolic. The statements described the prices of an electric kettle, a watch and a laptop. For example, “The laptop cost ten thousand dollars.”

The results seemed intuitive: A statement claiming the kettle cost $10,000 was viewed as hyperbolic, but a price tag of $50 was seen as literal.

In the Future, We Will Search in Three Dimensions

Search engines have come to define how most of us interact with digital information. But, if you think about it, they’re still pretty limited. We can search for words, and in recent years Google Images has let us search by picture. Want to search, though, for the flavor of an apple, or for the notes of a song whose name you can’t remember? You’re still out of luck.

However, researchers are making headway in another kind of novel search — searching by 3-D object. And that’s only going to become more useful in a world with growing access to 3-D printers.
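What might searching by 3-D object look like under the hood? One classic approach from the research literature is the D2 shape distribution: sample random pairs of points on a shape’s surface, histogram their distances and compare histograms. The sketch below assumes a point cloud has already been extracted from the model file; it illustrates the general technique, not how any particular site matches shapes.

```python
# Classic way to compare 3-D shapes for search: the D2 shape distribution,
# a histogram of distances between random pairs of surface points.
# A sketch only; real engines parse mesh files and index richer descriptors.
import math
import random

def d2_descriptor(points: list[tuple[float, float, float]],
                  bins: int = 32, samples: int = 10_000) -> list[float]:
    """Normalized histogram of pairwise distances between sampled points."""
    dists = [math.dist(*random.sample(points, 2)) for _ in range(samples)]
    top = max(dists) or 1.0            # scale-normalize so object size doesn't matter
    hist = [0] * bins
    for d in dists:
        hist[min(int(d / top * bins), bins - 1)] += 1
    return [h / samples for h in hist]

def shape_distance(a: list[float], b: list[float]) -> float:
    """L1 distance between descriptors; smaller means more similar shapes."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

Because the descriptor depends only on relative distances, it is insensitive to how a model happens to be rotated or positioned, which is exactly what a shape search needs.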

Printing the Future

From candy to jawbones to high heels, 3-D printing has finally found its way into the mainstream. With maker labs gracing public libraries, museums and schools, everyday people have access to a technology that was once reserved for industry.

The new website 3Dshap.es aims to collect all these 3-D files on the web in one place, allowing users to search for specific shapes and file types. The website, which is still in beta, is the work of a team of innovators at the UK-based 3DIndustri.es, including CEO Seena Rejal, and Michael Groenendyk, business librarian and researcher at Concordia University.

Talk to Your Gadgets

The builders of mobile gadgets face a paradox. They want to make the most powerful device they can, squeezed into the smallest box possible. But for a device to be useful, human beings have to be able to interact with all its features. More and more functions mean more and more buttons—and humans have stubbornly remained the same size and shape. A button can be made only so small before it becomes impossible to press, putting a tough limit on miniaturization. Different devices confront this paradox in different ways: Cell phone keypad buttons routinely do double, triple, and even quadruple duty, while devices like tablet computers use touch screens and gesture recognition.

AT&T is developing another solution. It wants you to be able simply to talk to an electronic device and have it follow your instructions. While some cell phones already offer voice recognition for basic tasks, such as looking up phone numbers in a contact list, AT&T envisions devices that can handle much more complicated voice commands, such as “Tell me where I can find the nearest ATM” or “Order me a pepperoni pizza.”
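Speech recognition is only half of such a system: once the audio is transcribed, the text still has to be mapped to an action. The toy intent matcher below illustrates that step with plain keyword overlap; the INTENTS table and the matching rule are invented for illustration, and AT&T’s actual language-understanding pipeline is far more sophisticated.

```python
# Toy intent matcher: once speech is transcribed to text, the command still
# has to be mapped to an action. Real systems use statistical language
# understanding; keyword overlap here only illustrates the step.
INTENTS = {
    "find_atm": {"atm"},
    "order_pizza": {"order", "pizza"},
    "call_contact": {"call"},
}

def match_intent(transcript: str) -> str:
    """Pick the intent whose keywords overlap the transcript the most."""
    words = set(transcript.lower().replace(",", "").split())
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else "unknown"

print(match_intent("Tell me where I can find the nearest ATM"))  # find_atm
print(match_intent("Order me a pepperoni pizza"))                # order_pizza
```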

For decades AT&T has been working on a voice recognition system that can handle just such requests.