Surveillance of the hive mind is the future of social control. Human behavior will be shaped and modified for the sake of commercial outcomes, not democracy.
New Logic – The Google data model was invented by biased, privileged white boys with no connection to centuries of doing business with real people. Monetizing your data (you, the user) is their revenue stream. Everything about you – data from your house, your car, what you shop for, all your financial information.
The product you buy is just a loss leader for behavioral data. They can give it away – the product is merely an interface to the data supply chain.
After Words with Shoshana Zuboff
Shoshana Zuboff, author of The Age of Surveillance Capitalism, talked about the growing business of collecting and selling consumer data. She was interviewed by Nilay Patel, editor-in-chief of The Verge. Prediction products, she argues, will become a new kind of derivative.
At 2:39: This goes beyond our industrial mass-society capitalism. https://www.c-span.org/video/?456637-1/after-words-shoshana-zuboff
The Two Codes Your Kids Need to Know
The College Board came up with a surprising conclusion about keys to success for college and life. https://www.nytimes.com/2019/02/12/opinion/college-board-sat-ap.html
“Their short answer was that if you want to be an empowered citizen in our democracy — able to not only navigate society and its institutions but also to improve and shape them, and not just be shaped by them — you need to know how the code of the U.S. Constitution works. And if you want to be an empowered and adaptive worker or artist or writer or scientist or teacher — and be able to shape the world around you, and not just be shaped by it — you need to know how computers work and how to shape them.”
New York writes new rules to rein in government by algorithm
Opaque software is already being used to sentence criminals and rate schoolteachers
<https://apolitical.co/solution_article/new-york-writes-new-rules-to-rein-in-government-by-algorithm/>
excerpt
The use of robot advisors in government is no longer a novelty. For years public servants have used computational tools — “automated decision systems” — to advise and guide their decisions. These programs process masses of data to assess the likelihood of certain events occurring, or to measure performance. Algorithms are used to judge where crime is likely to occur, what the chances are of convicted criminals reoffending, or to create scores to measure teacher performance in schools. Although at face value using computers as advisors might sound like it removes human prejudice from judgements, critics fear that many of these automated decision systems contain inbuilt biases. A 2016 ProPublica investigation, for example, found that the algorithms used in the courts of many US states are more likely to rate black people at higher risk of repeat offending. Now, a group of academics and public policy researchers in New York has established a set of guidelines to prevent the encoding of biases in algorithmic decision-making tools before they are used. This follows the city’s decision in December 2017 to establish a dedicated team to look at making the use of algorithms more transparent, and builds on AI Now’s work lobbying the city to adopt stronger rules for accountability.
<https://www.theguardian.com/cities/2014/jun/25/predicting-crime-lapd-los-angeles-police-data-analysis-algorithm-minority-report>
<https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>
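What "more likely to rate black people at higher risk" means in practice can be illustrated with a toy audit of the kind ProPublica ran: compare false positive rates (people labeled high-risk who did not reoffend) across groups. The data below is invented for illustration, not ProPublica's actual dataset.

```python
# Toy fairness audit: compare false positive rates across two groups.
# A false positive = predicted high-risk, but the person did not reoffend.
# All records here are hypothetical.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    non_reoffenders = sum(1 for _, actual in records if not actual)
    return false_positives / non_reoffenders if non_reoffenders else 0.0

# (predicted_high_risk, reoffended) for each person, by group
group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (False, False), (False, False), (True, True), (False, False)]

fpr_a = false_positive_rate(group_a)  # 2 of 4 non-reoffenders flagged -> 0.50
fpr_b = false_positive_rate(group_b)  # 1 of 4 non-reoffenders flagged -> 0.25
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

A system can look "accurate" overall while its errors fall disproportionately on one group, which is exactly the disparity audits like this are designed to surface.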
THE MONOPOLY-BUSTING CASE AGAINST GOOGLE, AMAZON, UBER, AND FACEBOOK
What tech companies have to fear from antitrust law
By Russell Brandom (@russellbrandom), Sep 5, 2018, 8:14am EDT. Illustrations by William Joel
https://www.theverge.com/2018/9/5/17805162/monopoly-antitrust-regulation-google-amazon-uber-facebook