Hello TQers,

MPR News is discontinuing Today’s Question so that we can focus on other areas.

I want to thank you all for responding to the wide range of questions we’ve posed here. Your spirited debates have helped inform our coverage and surface new angles on stories.

MPR will continue to rely on your perspectives and questions to guide our reporting through the Public Insight Network. If you don’t have an account, you can create one now for free.

We also host some spirited discussions on our Facebook page and on Twitter.

Some active queries you might be interested in:
Who are you supporting for president in 2016?
What is your view of the Jamar Clark outcome?
What do you want to know about water and water pollution in Minnesota?

Thanks again for the debates.

All the best,


Ideal Conceal gun folds up to look like a smartphone. (Photo: Ideal Conceal)

As Minnesota lawmakers consider banning cellphone cases that look like handguns, a Minnesota man is making guns that look like cellphones.

The gun maker, Kirk Kjellberg of Monticello, told KARE11 that he’s grown tired of people staring at him when he is carrying his gun.

“I walked towards the restroom and a little child, a boy about 7, saw me and said, ‘Mommy, mommy, that guy’s gotta gun,’” he said. “The whole restaurant of course turns and stares at you and I thought, ‘There’s just gotta be something better to do than this.’”

The gun, as advertised on his website, is designed to look just like a smartphone — “so your new pistol will easily blend in with today’s environment.”

Kjellberg said the prototype will be done in June, with manufacturing likely to begin in October. So far, he said, he’s had plenty of interest: more than 4,000 requests, he claims, including from law enforcement.

Today’s Question: Should gun makers be allowed to create handguns that look like cellphones?


“Microsoft’s newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down due to concerns with its inability to recognize when it was making offensive or racist statements. Of course, the bot wasn’t coded to be racist, but it “learns” from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist, and how to spout back ill-informed or inflammatory political opinions,” writes Sarah Perez at TechCrunch.

In case you missed it, Tay is an A.I. project built by Microsoft’s Technology and Research and Bing teams in an effort to conduct research on conversational understanding. That is, it’s a bot that you can talk to online. The company described the bot as “Microsoft’s A.I. fam from the internet that’s got zero chill!”, if you can believe that.

Tay is able to perform a number of tasks, like telling users jokes, or offering up a comment on a picture you send her, for example. But she’s also designed to personalize her interactions with users, while answering questions or even mirroring users’ statements back to them.

As Twitter users quickly came to understand, Tay would often repeat back racist tweets with her own commentary. What was also disturbing about this, beyond just the content itself, is that Tay’s responses were developed by a staff that included improvisational comedians. That means even as she was tweeting out offensive racial slurs, she seemed to do so with abandon and nonchalance.
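The failure mode Perez describes, a bot that takes in user messages and feeds them straight back out with no filtering layer in between, can be sketched in a few lines of Python. This is a hypothetical illustration of that design flaw, not Microsoft's actual Tay code; the class and method names are invented for the example.

```python
import random

class ParrotBot:
    """A minimal 'learn by mirroring' chatbot sketch.

    Hypothetical illustration only: every phrase a user sends is stored
    and later repeated back, with no content filter between intake and
    output. Anything the bot is taught, it will eventually say.
    """

    def __init__(self):
        self.learned_phrases = []

    def learn(self, user_message):
        # Every incoming message becomes future output material.
        self.learned_phrases.append(user_message)

    def reply(self):
        # With nothing learned yet, fall back to a canned prompt.
        if not self.learned_phrases:
            return "Tell me something!"
        # Otherwise echo a stored phrase verbatim, whatever it contains.
        return random.choice(self.learned_phrases)


bot = ParrotBot()
bot.learn("Robots are great")
print(bot.reply())  # repeats a learned phrase verbatim
```

The point of the sketch is that nothing in the loop evaluates *what* was learned; a moderation step would have to sit between `learn` and `reply`, which is the layer Tay evidently lacked.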

Today’s Question: What is your reaction to Tay, Microsoft’s AI chat-bot that learned to be racist?
