24.04.2017 Neural AI Judges People

Neural Network Learns to Identify Criminals by Their Faces

Is this racism?
Cesare Lombroso (1835–1909), an Italian criminologist, believed that criminals were “throwbacks” more closely related to apes than to law-abiding citizens. He was convinced he could identify them by ape-like features such as a sloping forehead, unusually sized ears, various facial asymmetries, and long arms.

Later, after the data on physical abnormalities in criminals versus noncriminals were statistically analyzed, it was concluded that there was no statistical difference. The debate rested until 2011, when a group of psychologists from Cornell University showed that people were actually quite good at distinguishing criminals from noncriminals just by looking at photos of them.

Now we hear of the work of Xiaolin Wu and Xi Zhang from Shanghai Jiao Tong University in China. They took ID photos of 1,856 Chinese men between the ages of 18 and 55 with no facial hair; half of these men were criminals. They then used 90 percent of these images to train a convolutional neural network to recognize the difference, and tested the network on the remaining 10 percent. The neural network could distinguish criminals from noncriminals with an accuracy of 89.5%.
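The 90/10 evaluation protocol described above can be sketched in a few lines. This is a minimal pure-Python illustration, not the researchers' actual pipeline; the function names and the dummy photo labels are hypothetical.

```python
import random

def split_train_test(samples, train_frac=0.9, seed=0):
    """Shuffle and split a labeled dataset, as in the 90/10 protocol above."""
    rng = random.Random(seed)
    data = list(samples)
    rng.shuffle(data)
    cut = int(len(data) * train_frac)
    return data[:cut], data[cut:]

def accuracy(predictions, labels):
    """Fraction of correct binary predictions (1 = criminal, 0 = noncriminal)."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# 1,856 dummy samples, half labelled 1 ("criminal"), half 0 ("noncriminal")
dataset = [(f"photo_{i}.jpg", i % 2) for i in range(1856)]
train, test = split_train_test(dataset)
print(len(train), len(test))  # 1670 1856-1670=186
```

With 1,856 images, a 90/10 split leaves only 186 test images, which is one reason the reported 89.5% figure deserves caution.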

Although controversial, that result is not entirely unexpected. If humans can spot criminals by looking at their faces, as psychologists found in 2011, it should come as no surprise that machines can do it, too. Police might want such a program to test against ID photos of the whole population of their country.

That’s a kind of Minority Report scenario in which law-breakers could be identified before they had committed a crime.

Critical points in the Chinese research are the small sample of under 2,000 photos and the few criteria used: the curvature of the upper lip, the distance between the inner corners of the eyes, and the angle between two lines drawn from the tip of the nose to the corners of the mouth. It also matters whether the photos were taken before or after conviction: the subjects' living standard and circumstances might have influenced their appearance at the time the photo was taken.
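The geometric criteria listed above are simple measurements over facial landmarks. As a sketch of how little information they actually carry, here is how the inner-eye distance and the nose-to-mouth-corner angle could be computed from 2-D landmark coordinates; the landmark values are invented for illustration, and the paper's exact landmark definitions are not given here.

```python
import math

def euclidean(p, q):
    """Distance between two 2-D landmark points (e.g. inner eye corners)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nose_mouth_angle(nose_tip, mouth_left, mouth_right):
    """Angle in degrees between the two lines from the nose tip
    to the left and right mouth corners."""
    a1 = math.atan2(mouth_left[1] - nose_tip[1], mouth_left[0] - nose_tip[0])
    a2 = math.atan2(mouth_right[1] - nose_tip[1], mouth_right[0] - nose_tip[0])
    diff = abs(a1 - a2)
    return math.degrees(min(diff, 2 * math.pi - diff))

# Hypothetical pixel coordinates from a frontal ID photo
nose = (100, 120)
mouth_l, mouth_r = (80, 160), (120, 160)
print(euclidean((85, 80), (115, 80)))          # inner-eye distance: 30.0
print(round(nose_mouth_angle(nose, mouth_l, mouth_r), 2))  # 53.13
```

That a classifier built on a handful of such scalar measurements is claimed to separate criminals from noncriminals is precisely what makes the result suspect.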

Frightening above all is a new era of automatic anthropometry. A short time ago, researchers revealed how they had trained a deep-learning machine to judge, in the same way as humans, whether somebody looked trustworthy from a snapshot of their face. And machines will soon be able to study movement, too. That raises the possibility of analyzing how we move, how we interact ...


Read more: https://www.technologyreview.com/s/602955/neural-network-learns-to-identify-criminals-by-their-faces/



Comment: RE: 20170424 Neural AI Judges People

Ha - finally a method to gauge the reliability of politicians' blah-blah during their speaking time; that opens up the chance for completely new interview formats

L., 24.04.2017 10:52


RE: 20170424 Neural AI Judges People

Creepy

O., 24.04.2017 11:09


RE: 20170424 Neural AI Judges People

Very!
That is the nightmare scenario: arrested and convicted because your face doesn't fit...

St., 24.04.2017 12:22


RE: 20170424 Neural AI Judges People

Creepy in two ways. First, it is creepy that in the future computers may contribute to people being treated unequally based on their appearance. Second, it is creepy that the computers can achieve a 90% hit rate.
And many follow-up questions arise:
* Do these 90% of criminals perhaps even want to be recognized as criminals?
* Which 90% of criminals can the computer recognize? Only the drug dealers and pimps, who are presumably already marked by life in other ways, or even well-off tax evaders like Uli Hoeneß? Is the algorithm perhaps even good at reading dishonesty from people's faces?
* If it really does work well, would it be more sensible to ban this technology completely as ethically incompatible, or should it nevertheless be used in some way to prevent crimes and, for example, to bring the people concerned into an honest life through targeted support programs?

Fu., 24.04.2017 12:34


Category[21]: Our Topics in the Press Short-Link to this page: a-fsa.de/e/2LM
Link to this page: https://www.aktion-freiheitstattangst.org/de/articles/6004-20170424-neuronale-ki-beurteilt-menschen.html
Link with Tor: http://a6pdp5vmmw4zm5tifrc3qo2pyz7mvnk4zzimpesnckvzinubzmioddad.onion/de/articles/6004-20170424-neuronale-ki-beurteilt-menschen.html
Tags: #Biometrie #Gesichtserkennung #MinorityReport #KI #Grundrechte #Menschenrechte #Freizügigkeit #Unschuldsvermutung #Verhaltensänderung #Indect #Fingerabdruck #ElektronischerPersonalausweis #ElektronischerPass #Persönlichkeitsrecht #Privatsphäre
Created: 2017-04-24 08:27:43
Hits: 1697
