April 29, 2013

Sensor Driven Discrimination

Creepy, or just insanely irritating?

In 2011, Intel and Kraft teamed up to launch iSample kiosks that rely on an optical sensor to determine the age and sex of the shopper and then suggest products to serve him or her. The machine was initially used to market Temptations—a jelly-based dessert advertised as “the first Jell-O that's just for adults.” So, on detecting a child, the machine would ask him or her to step away. A similar vending machine in Japan relies on facial recognition technology to recommend drinks to different consumers: men younger than 50 are recommended canned coffee drinks, while women in their 20s are offered tea.

Right now, sensors could help automate simple, binary decisions—don't let youngsters borrow adult DVDs!—but it won't take long before they enable interventions of a more elaborate variety. Once our faces can be tied to our social networking profiles, all sorts of other manipulations enter the picture. Discounts, yes—but there may also be situations in which our willingness to pay for something is clearly greater than the price we are charged by a dumb, sensorless machine. If the machine can predict those situations—by analyzing our social networking profile or querying the self-tracking app on our phone to find out just how thirsty we are—it can charge us exactly what we are willing to pay.
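Just to make the mechanics concrete: the logic being described fits in a few lines of code. Here is a toy sketch in Python; every name in it (the Shopper fields, the thirst score, quote_price) is invented for illustration and implies no actual kiosk or sensor API:

```python
# Toy sketch of the kiosk logic described above. Every name below is
# invented for illustration; no real sensor, kiosk, or pricing API is implied.
from dataclasses import dataclass, field

@dataclass
class Shopper:
    estimated_age: int          # guessed by the optical sensor
    sex: str                    # "M" or "F", as the sensor guesses
    thirst_score: float = 0.5   # hypothetically queried from a self-tracking app

@dataclass
class Product:
    name: str
    list_price: float
    minimum_age: int = 0
    # affinity[(sex, age_band)] -> how hard to push this product at that shopper
    affinity: dict = field(default_factory=dict)

def age_band(age: int) -> str:
    return "20s" if age < 30 else "30-49" if age < 50 else "50+"

def recommend(catalog: list[Product], shopper: Shopper) -> list[Product]:
    """Gate by age (the 'step away, kids' rule), then rank by demographic fit."""
    eligible = [p for p in catalog if shopper.estimated_age >= p.minimum_age]
    key = (shopper.sex, age_band(shopper.estimated_age))
    return sorted(eligible, key=lambda p: p.affinity.get(key, 0), reverse=True)

def quote_price(product: Product, shopper: Shopper) -> float:
    """Nudge the price toward estimated willingness to pay, never below list."""
    willingness = product.list_price * (1 + shopper.thirst_score)  # crude model
    return round(max(product.list_price, willingness), 2)

catalog = [
    Product("canned coffee", 1.50, affinity={("M", "20s"): 3, ("M", "30-49"): 3}),
    Product("green tea", 1.20, affinity={("F", "20s"): 3}),
    Product("adults-only dessert", 2.00, minimum_age=18),
]
shopper = Shopper(estimated_age=27, sex="F", thirst_score=0.8)
picks = recommend(catalog, shopper)
print(picks[0].name, quote_price(picks[0], shopper))  # green tea 2.16
```

The gating rule is the trivial part. The quote_price step is where a sensor stops being a clerk and starts being a negotiator.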

Bonus question: how does a machine go about "creating inequality"? Isn't the entire point of sensor-based marketing and pricing that we're already not-the-same?

Posted by Cassandra at April 29, 2013 07:55 AM

Comments

The machine doesn't create inequality, it just recognizes it. You know, like an IQ test--oh wait, we're not allowed to do those anymore, are we?

Posted by: CAPT Mongo at April 29, 2013 09:15 AM

"Right now, sensors could help automate simple, binary decisions"

It is the great failure of science fiction that it had automatons as the highest form of robot instead of the lowest form of human. It was not the scientist and the robot that begat the automaton, but the technocrat and the sociologist.

Posted by: George Pal at April 29, 2013 10:26 AM

I think of those traditional sort of shops where you haggle over prices, and the salesman tries to sort out what you'd most like to buy and what you're willing to pay. Is this just about automating that process -- making an automatic merchant?

Posted by: Grim at April 29, 2013 11:19 AM

Is this just about automating that process -- making an automatic merchant?

Absolutely, but in place of a human mind capable of weighing complex factors and arriving at some judgment (possibly), we have a sensor doing what I can only describe as profiling or stereotyping.

How progressive!

Posted by: Cass at April 29, 2013 11:27 AM

My concern might be the opposite one -- not that it can't do it well enough, but that it could eventually do it better (i.e., more accurately determining what we want and what we will pay). One of the jobs I might have thought safe from automation was this kind of salesman. I figured robots would eventually do all of our manufacturing, but we could still find jobs for low-skilled (but personable) people in sales.

It's probably better for employers, because the kind of people who are difficult to educate are usually also difficult to employ. Still, if people like this aren't going to be wards of the state, there has to be work for them. The more the market has no need for them, the more we end up having to take care of them on the public's dime.

Regarding your concern, I wonder if people would be less offended by being profiled automatically according to market research, in the way that we are less offended by being profiled in a similar way by actuaries? We get really mad when some person makes a judgment about us based on obvious factors like skin color or age, but when the market sets rates according to data we take it as a kind of dispassionate assessment. We may not like the judgment of the market (especially in terms of setting rates for insurance as we get older), but we don't really object to it in the same way.

Posted by: Grim at April 29, 2013 01:16 PM

I wonder if people would be less offended by being profiled automatically according to market research, in the way that we are less offended by being profiled in a similar way by actuaries? We get really mad when some person makes a judgment about us based on obvious factors like skin color or age, but when the market sets rates according to data we take it as a kind of dispassionate assessment.

A while back, Google profiling informed me that, based on their state-of-the-art tracking of my every online move (or idle search), I am an upper-income, well-educated 50-60 year old *man* whose interests are:

- sports
- economics
- some other inane stuff that I'm not particularly interested in and have since forgotten

I had to laugh, because most of my online shopping searches of late involve oriental rugs, landscaping and home improvement ideas, etc. Looking at the catalogs that are sent to our house addressed to me, you'd think we winter in Gstaad and summer somewhere I can't pronounce or afford :p

They got the economics part right, but I spend an equal amount of time reading anything having to do with relationships (parent/child, marriage, transgendered Arctic wolf/enlightened Humyn). But there was nary a hint of any such outdated gender-stereotypical fodder.

I'm thinking they have some work to do on their algorithms :p

Posted by: Cass at April 29, 2013 01:33 PM

So, does this mean that if the machine determines I'm overweight, it won't allow me to purchase that double fudge brownie? How about body scanner technology to determine blood pressure, heart rate, etc. as well as facial recognition? "Shirley," says the Nanny State, "if it saves the life of one child..."
No.
Thank.
You.

Posted by: DL Sly at April 29, 2013 02:00 PM

A census worker tried to analyze me once. I ate his liver with some fava beans and a nice Chianti.

Posted by: Texan99 at April 29, 2013 02:13 PM

As with errors, sensors don't discriminate.

Programmers discriminate. Whatever you do, don't let Sheldon Cooper work on the one selling Egg Salad Sandwiches.

Expect a rash of new regulations.

Posted by: Yu-Ain Gonnano at April 29, 2013 02:58 PM

What will really be great is when they find a way to tie this new innovation into people's Facebook accounts!

Posted by: Murk Zuckerborg at April 29, 2013 04:56 PM

Oh, good. I'm safe then.

Posted by: Yu-Ain Gonnano at April 29, 2013 05:14 PM

You and me both :p

Yet another reason to hate and distrust Facebook!

Posted by: Cass at April 29, 2013 05:23 PM

So, does this mean that if the machine determines I'm overweight, it won't allow me to purchase that double fudge brownie? How about body scanner technology to determine blood pressure, heart rate, etc. as well as facial recognition? "Shirley," says the Nanny State, "if it saves the life of one child..." No. Thank. You.

Dark Lord, you ignorant slut. I'll be the judge of what's good for you. DON'T YOU DARE QUESTION MAH AUTHORITEH!

Posted by: Mayor Bloomberg at April 29, 2013 05:24 PM

Well, Google's kind of the opposite of what I was thinking about. They get to interact with you directly, so it's not profiling; it's an assessment (apparently a poor one, so far) based on what you've told them you're interested in.

Amazon is good at that kind of thing already. They know what you've bought, and can see what other people who have bought those things also tend to buy, and so they can make recommendations that are pretty good. They sometimes even send me free song downloads, not so much in the hope that I'll buy more, but in order to build a network for music like the one they have for books. (Pandora has a different model, but is also pretty good at predicting what you might like from actual regular interactions with you.)
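The trick underneath is mostly co-occurrence counting. A toy sketch, with invented data and certainly not Amazon's actual system:

```python
# Toy "people who bought those things also tend to buy" recommender.
# The purchase data is invented; this is not Amazon's actual system.
from collections import Counter

purchases = {
    "alice": {"iliad", "odyssey", "beowulf"},
    "bob":   {"iliad", "beowulf", "banjo strings"},
    "carol": {"odyssey", "banjo strings"},
}

def recommend(me: str, k: int = 3) -> list[str]:
    """Suggest items that shoppers with overlapping purchases also bought."""
    mine = purchases[me]
    counts = Counter()
    for other, theirs in purchases.items():
        if other != me and mine & theirs:   # shares at least one purchase with me
            counts.update(theirs - mine)    # tally what they own that I don't
    return [item for item, _ in counts.most_common(k)]

print(recommend("alice"))  # ['banjo strings']
```

The real version's cleverness is in doing that at scale and weighting the counts sensibly, but the shape is the same: no knowledge of who I am, just of what baskets like mine contain.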

What I'm wondering about is whether you could learn from marketing data that (say) people of sex Y and race X tend to make use of product Z, at about price W, in the local area code where the interaction was taking place. That's not based on any real knowledge about the individual, just profiling.

Is it as offensive as a salesman making a profiling assumption, based on his lifelong experience in the area? I think maybe we wouldn't be as offended if it came from a machine, a program built on dispassionate marketing data, rather than from a person to whom we might attribute racist motives.

On the other hand, it's probably a needless consideration. Soon the facial recognition technology will be so good that every machine in the world will know exactly who you are, and be able to pull all that individual data about you from Google and Amazon.

Posted by: Grim at April 29, 2013 10:16 PM

Is it as offensive as a salesman making a profiling assumption, based on his lifelong experience in the area? I think maybe we wouldn't be as offended if it came from a machine, a program built on dispassionate marketing data, rather than from a person to whom we might attribute racist motives.

Well, I was being somewhat flippant before, but there was actually a point in there.

Machines don't profile. That is true. But it's also like saying computers don't make mistakes. They don't. And yet, things managed by computers get screwed up all the time.

That's because, while computers don't make mistakes, programmers do.

Programs making predictions based on dispassionate marketing data don't spring up out of the ether. They are written by "a person to whom we might attribute racist motives".

And right, wrong, or indifferent, the regulators start with the assumption that you are a bigoted bastard and make you prove otherwise. Notice that Amazon, Google, and Pandora never ask your gender, race, marital status, etc. They *might* ask your age to confirm you are an adult, but that's it.

Posted by: Yu-Ain Gonnano at April 30, 2013 10:04 AM