Philip Agre predicted the dark side of the Internet 30 years ago. Why did no one listen?
Philip Agre, a computer scientist turned humanities professor, was prescient about many of the ways technology would impact the world https://www.washingtonpost.com/technology/2021/08/12/philip-agre-ai-disappeared/
In the early ’90s, Philip Agre reviewed my site, gave it a thumbs up, and encouraged my work. This was published on the Educational CyberPlayGround, Inc. http://www.edu-cyberpg.com
Phil Agre: How to Help Someone Use A Computer. 1996
https://edu-cyberpg.com/Technology/Agre.html
THE DARK SIDE
#T-Mobile, #Apple, #Blackberry are disgusting surveillance tools
Engadget: T-Mobile confirms data breach affects over 47 million people.
As part of its ongoing data breach investigation, T-Mobile has confirmed the enormity of the stolen information. Roughly 47.8 million current and former or prospective customers have been affected by the cyberattack on its systems, the carrier confirmed on Wednesday. Of that number, about 7.8 million are current T-Mobile postpaid accounts and the rest are prior or potential users who had applied for credit, the company added in a press release.
https://www.engadget.com/t-mobile-data-breach-affected-people-103104868.html
T-Mobile Investigating Claims of Massive Customer Data Breach
Hackers selling the data are claiming it affects 100 million users.
https://www.vice.com/en/article/akg8wg/tmobile-investigating-customer-data-breach-100-million
The T-Mobile Data Breach Is One You Can’t Ignore
Hackers claim to have obtained the data of 100 million people—including sensitive personal information.
https://www.wired.com/story/t-mobile-hack-data-phishing/
INCEL
Nazi, Proud Boy, Oath Keepers, Boogaloo, Trump, KKK, Hate
Despite the main social networks’ attempts to crack down, the ‘incel’ community remains as influential as it was in 2014, when an English 22-year-old killed six people and then himself on the streets of Isla Vista, California, motivated by his hatred of women.
https://www.theguardian.com/media/2021/aug/16/social-networks-struggle-to-crack-down-on-incel-movement
AI
Researchers fooled AI into ignoring stop signs using a cheap projector. “A trio of researchers at Purdue today published pre-print research demonstrating a novel adversarial attack against computer vision systems that can make an AI see – or not see – whatever the attacker wants.”
https://thenextweb.com/news/researchers-tricked-ai-ignoring-stop-signs-using-cheap-projector
How Grayshift Keeps its iPhone Unlocking Tech Secret
Copies of non-disclosure and other agreements obtained by Motherboard show the kind of information that iPhone unlocker Grayshift tells police to keep secret. https://www.vice.com/en/article/m7e498/how-grayshift-keeps-its-iphone-unlocking-tech-secret
APPLE IS NOW A DISGUSTING PHONE
Is Apple’s NeuralMatch searching for abuse, or for people?
Apple stunned the tech industry on Thursday by announcing that the next version of iOS and macOS will contain a neural network to scan photos for sex abuse. Each photo will get an encrypted ‘safety voucher’ saying whether or not it’s suspect, and if more than about ten suspect photos are backed up to iCloud, then a clever cryptographic scheme will unlock the keys used to encrypt them. Apple staff or contractors can then look at the suspect photos and report them.
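The threshold idea in that description can be modeled with textbook secret sharing. The sketch below is not Apple’s actual protocol, which reportedly combines perceptual hashing with private set intersection and threshold secret sharing; it is a minimal Python illustration, with invented parameters, of the one property described above: fewer than roughly ten suspect-photo vouchers reveal nothing, while reaching the threshold lets the server reconstruct the decryption key.

import secrets

PRIME = 2**127 - 1   # prime field large enough for a toy 64-bit secret
THRESHOLD = 10       # the "about ten suspect photos" in the description above

def make_shares(secret, n, t=THRESHOLD):
    # Shamir secret sharing: any t shares reconstruct the secret, fewer reveal nothing.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = secrets.randbits(64)            # stands in for the photo-decryption key
vouchers = make_shares(key, n=15)     # each "suspect" photo voucher carries one share
assert reconstruct(vouchers[:THRESHOLD]) == key       # threshold reached: key recovered
assert reconstruct(vouchers[:THRESHOLD - 1]) != key   # below threshold: nothing learned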
Apple’s child protection features spark concern within its own ranks
Apple’s device surveillance plan is a threat to user privacy — and press freedom
https://freedom.press/news/apples-device-surveillance-plan-is-a-threat-to-user-privacy-and-press-freedom/
Apple is now scanning your phone before anything gets to their servers. It does not matter whether you put it in iCloud; they also do this without internet, using mesh networking.
iPhone Neural Hash – SHOCKING AI Tech
We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous
Earlier this month, Apple unveiled <https://www.washingtonpost.com/business/apple-to-scan-us-phones-for-images-of-child-abuse/2021/08/05/e6c968ac-f61f-11eb-a636-18cac59a98dc_story.html?itid=lk_inline_manual_4> a system that would scan iPhone and iPad photos for child sexual abuse material (CSAM). The announcement sparked a civil liberties <https://www.eff.org/deeplinks/2021/08/apples-plan-think-different-about-encryption-opens-backdoor-your-private-life> firestorm <https://cdt.org/insights/international-coalition-calls-on-apple-to-abandon-plan-to-build-surveillance-capabilities-into-iphones-ipads-and-other-products/>, and Apple’s own employees have been expressing alarm <https://www.reuters.com/technology/exclusive-apples-child-protection-features-spark-concern-within-its-own-ranks-2021-08-12/>. The company insists reservations about the system are rooted in “misunderstandings <https://9to5mac.com/2021/08/06/apple-internal-memo-icloud-photo-scanning-concerns/>.” We disagree.
We wrote the only peer-reviewed publication on how to build a system like Apple’s <https://www.washingtonpost.com/opinions/2021/08/13/apple-csam-child-safety-tool-hashing-privacy/?itid=lk_inline_manual_5> — and we concluded the technology was dangerous. We’re not concerned because we misunderstand how Apple’s system works. The problem is, we understand exactly how it works.
Our research project <https://www.usenix.org/conference/usenixsecurity21/presentation/kulshrestha> began two years ago, as an experimental system to identify CSAM in end-to-end-encrypted online services. As security researchers, we know the value of end-to-end encryption, which protects data from third-party access. But we’re also horrified that CSAM is proliferating on encrypted platforms. And we worry online services are reluctant to use encryption without additional tools to combat CSAM.
We sought to explore a possible middle ground, where online services could identify harmful content while otherwise preserving end-to-end encryption. The concept was straightforward: If someone shared material that matched a database of known harmful content, the service would be alerted. If a person shared innocent content, the service would learn nothing. People couldn’t read the database or learn whether content matched, since that information could reveal law enforcement methods and help criminals evade detection.
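A minimal sketch of that interface, assuming nothing about the paper’s actual cryptography (the class name and hash choice are mine): the service holds a set of fingerprints of known harmful content and is alerted only when an upload matches, while the real design additionally hides the database and the match result behind cryptographic protocols.

import hashlib

class MatchingService:
    def __init__(self, known_bad_fingerprints):
        self._db = set(known_bad_fingerprints)   # fingerprints of known harmful content

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Stand-in fingerprint; deployed systems use perceptual hashes that survive
        # resizing and re-encoding, not an exact-match cryptographic hash like this.
        return hashlib.sha256(content).hexdigest()

    def check_upload(self, content: bytes) -> bool:
        # True means "alert the service"; innocent content matches nothing.
        return self.fingerprint(content) in self._db

service = MatchingService({MatchingService.fingerprint(b"known harmful sample")})
assert service.check_upload(b"known harmful sample") is True
assert service.check_upload(b"holiday photo") is False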
Knowledgeable observers argued a system like ours was far <https://blog.cryptographyengineering.com/2019/12/08/on-client-side-media-scanning/> from feasible <https://www.eff.org/deeplinks/2019/11/why-adding-client-side-scanning-breaks-end-end-encryption>. After many false starts, we built a working prototype. But we encountered a glaring problem.
Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material <https://citizenlab.ca/2018/08/cant-picture-this-an-analysis-of-image-filtering-on-wechat-moments/>. India enacted rules this year <https://www.eff.org/deeplinks/2021/07/indias-draconian-rules-internet-platforms-threaten-user-privacy-and-undermine> that could require pre-screening content critical of government policy. Russia recently fined Google <https://www.reuters.com/technology/russia-fines-google-4-mln-roubles-failing-delete-content-tass-2021-05-25/>, Facebook <https://apnews.com/article/europe-russia-technology-government-and-politics-cea2b0203f13a2e6e17951f2eb570a31> and Twitter <https://apnews.com/article/media-moscow-social-media-europe-russia-cc0f314ee9e77811a81d15095c2dce18> for not removing pro-democracy protest materials.
We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
We were so disturbed that we took a step we hadn’t seen before in computer science literature: We warned against our own system design, urging further research on how to mitigate the serious downsides. We’d planned to discuss paths forward at an academic conference this month.
That dialogue never happened. The week before our presentation, Apple announced <https://www.apple.com/child-safety/> it would deploy its nearly identical system on iCloud Photos, which exists on more than 1.5 billion devices <https://financialpost.com/technology/apple-tops-wall-street-expectations-on-record-iphone-revenue-china-sales-surge>. Apple’s motivation, like ours, was to protect children. And its system was technically more efficient and capable than ours. But we were baffled to see that Apple had few answers for the hard questions we’d surfaced.
China is Apple’s second-largest market <https://www.theverge.com/2015/4/27/8505063/china-is-now-apples-second-biggest-market>, with probably hundreds of millions of devices. What stops the Chinese government from demanding Apple scan those devices for pro-democracy materials? Absolutely nothing, except Apple’s solemn promise. This is the same Apple that blocked <https://www.washingtonpost.com/news/the-switch/wp/2017/07/31/apple-is-pulling-vpns-from-the-chinese-app-store-heres-what-that-means/?itid=lk_inline_manual_20> Chinese citizens from apps that allow access to censored material <https://www.washingtonpost.com/world/asia_pacific/holes-close-in-chinas-great-firewall-as-apple-amazon-snub-apps-to-bypass-censors/2017/08/02/77750f38-7766-11e7-803f-a6c989606ac7_story.html>, that acceded to China’s demand to store user data in state-owned data centers <https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html> and whose chief executive infamously declared <https://www.washingtonpost.com/world/asia_pacific/holes-close-in-chinas-great-firewall-as-apple-amazon-snub-apps-to-bypass-censors/2017/08/02/77750f38-7766-11e7-803f-a6c989606ac7_story.html>, “We follow the law wherever we do business.”
Apple’s muted response about possible misuse is especially puzzling because it’s a high-profile flip-flop. After the 2015 terrorist attack <https://www.washingtonpost.com/news/post-nation/wp/2015/12/05/fbi-investigating-san-bernardino-shooting-as-an-act-of-terrorism/> in San Bernardino, Calif., the Justice Department tried to compel <https://www.washingtonpost.com/news/post-nation/wp/2016/12/02/one-year-after-san-bernardino-police-offer-a-possible-motive-as-questions-still-linger/> Apple to facilitate access to a perpetrator’s encrypted iPhone. Apple refused, swearing in court filings that if it were to build such a capability once, all bets were off about how that capability might be used in future.
“It’s something we believe is too dangerous to do,” Apple explained <https://www.apple.com/customer-letter/answers/>. “The only way to guarantee that such a powerful tool isn’t abused … is to never create it.” That worry is just as applicable to Apple’s new system.
Apple has also dodged the problems of false positives and malicious gaming, sharing few details about how its content matching works.
The company’s latest defense <https://www.wsj.com/articles/apple-executive-defends-tools-to-fight-child-porn-acknowledges-privacy-backlash-11628859600> of its system is that there are technical safeguards against misuse, which outsiders can independently audit. But Apple has a record <https://www.washingtonpost.com/technology/2021/08/16/apple-corellium-child-porn-iphone/> of obstructing security research. And its vague proposal <https://www.apple.com/child-safety/pdf/Security_Threat_Model_Review_of_Apple_Child_Safety_Features.pdf> for verifying the content-matching database would flunk an introductory security course.
Apple could implement stronger technical protections, providing public proof that its content-matching database originated with child-safety groups. We’ve already designed a protocol <https://twitter.com/jonathanmayer/status/1426540534517182464> it could deploy. Our conclusion, though, is that many downside risks probably don’t have technical solutions.
Apple is making a bet that it can limit its system to certain content in certain countries, despite immense government pressures. We hope it succeeds in both protecting children and affirming incentives for broader adoption of encryption. But make no mistake that Apple is gambling with security, privacy and free speech worldwide.
Apple drops intellectual property lawsuit against maker of security tools – Reed Albergotti
https://www.washingtonpost.com/technology/2021/08/10/apple-drops-corellium-lawsuit/
Apple settled its federal lawsuit Tuesday against Corellium, the maker of tools that allow security researchers to find software flaws in iPhones, according to court records.
BlackBerry resisted announcing major flaw in software powering cars, hospital equipment
https://www.politico.com/news/2021/08/17/blackberry-qnx-vulnerability-hackers-505649
The former smartphone maker turned software firm resisted announcing a major vulnerability until after federal officials stepped in.
By BETSY WOODRUFF SWAN and ERIC GELLER
08/17/2021 02:42 PM EDT
A flaw in software made by BlackBerry has left two hundred million cars, along with critical hospital and factory equipment, vulnerable to hackers — and the company opted to keep it secret for months.
On Tuesday, BlackBerry announced that old but still widely used versions of one of its flagship products, an operating system called QNX, contain a vulnerability that could let hackers cripple devices that use it. But other companies affected by the same flaw, dubbed BadAlloc, went public with that news in May.
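For context, the BadAlloc family, as Microsoft’s researchers described it publicly, stems from integer overflows in allocation-size arithmetic: a count multiplied by an item size wraps around in fixed-width integers, so the allocator reserves far less memory than the caller later writes. A small Python illustration of that arithmetic, with invented numbers:

UINT32_MAX = 2**32 - 1

def vulnerable_alloc_size(count, item_size):
    # What a 32-bit allocator computes after silent wraparound.
    return (count * item_size) & UINT32_MAX

def patched_alloc_size(count, item_size):
    # What a fixed allocator should do: reject requests that would overflow.
    total = count * item_size
    if total > UINT32_MAX:
        raise OverflowError("allocation size overflows 32 bits")
    return total

count, item_size = 0x4000_0001, 4                # attacker-influenced element count
print(vulnerable_alloc_size(count, item_size))   # 4 bytes reserved, ~4 GiB expected
# patched_alloc_size(count, item_size) raises OverflowError instead of under-allocating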
Two people familiar with discussions between BlackBerry and federal cybersecurity officials, including one government employee, say the company initially denied that BadAlloc impacted its products at all and later resisted making a public announcement, even though it couldn’t identify all of the customers using the software.
The back-and-forth between BlackBerry and the government highlights a major difficulty in fending off cyberattacks on increasingly internet-connected devices ranging from robotic vacuum cleaners to wastewater-plant management systems. When companies such as BlackBerry sell their software to equipment manufacturers, they rarely provide detailed records of the code that goes into the software — leaving hardware makers, their customers and the government in the dark about where the biggest risks lie.
BlackBerry may be best known for making old-school smartphones beloved for their manual keyboards, but in recent years it has become a major supplier of software for industrial equipment, including QNX, which powers everything from factory machinery and medical devices to rail equipment and components on the International Space Station. BadAlloc could give hackers a backdoor into many of these devices, allowing bad actors to commandeer them or disrupt their operations.
Microsoft security researchers announced in April that they’d discovered the vulnerability and found it in a number of companies’ operating systems and software. In May, many of those companies worked with the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency to publicly reveal the flaws and urge users to patch their devices.
BlackBerry wasn’t among them.
Privately, BlackBerry representatives told CISA earlier this year that they didn’t believe BadAlloc had impacted their products, even though CISA had concluded that it did, according to the two people, both of whom spoke anonymously because they were not authorized to discuss the matter publicly. Over the last few months, CISA pushed BlackBerry to accept the bad news, eventually getting them to acknowledge the vulnerability existed.
Then BlackBerry said it didn’t intend to go public to deal with the problem. The company told CISA it planned to reach out privately to its direct customers and warn them about the QNX issue.
Technology companies sometimes prefer private vulnerability disclosures because doing so doesn’t tip off hackers that patching is underway — but also because it limits (or at least delays) any resulting public backlash and financial losses.
But that outreach would only cover a fraction of the affected companies, because BlackBerry also told CISA that it couldn’t identify everyone using its software in order to warn them.
That’s because BlackBerry licenses QNX to “original equipment manufacturers,” which in turn use it to build products and devices for their customers, just as Microsoft sells its Windows operating system to HP, Dell and other computer makers. BlackBerry told the government it doesn’t know where its software ends up, and the people using it don’t know where it came from. Its known customers are a comparatively small group.
“Their initial thought was that they were going to do a private advisory,” said a CISA employee. Over time, though, BlackBerry “realized that there was more benefit to being public.”
The agency produced a PowerPoint presentation, which POLITICO reviewed, stressing that many BlackBerry customers wouldn’t know about the danger unless the federal government or the original equipment manufacturers told them. CISA even cited potential risks to national security and noted that the Defense Department had been involved in finding an acceptable timing for BlackBerry’s announcement.
CISA argued that BlackBerry’s planned approach would leave out many users who could be in real danger. A few weeks ago, BlackBerry agreed to issue a public announcement. On Tuesday, the company published an alert about the vulnerability and urged customers to upgrade their devices to the latest QNX version. CISA issued its own alert as well.
In a statement to POLITICO, BlackBerry did not deny that it initially resisted a public announcement. The company said it maintains “lists of our customers and have actively communicated to those customers regarding this issue.”
“Software patching communications occur directly to our customers,” the company said. “However, we will make adjustments to this process in order to best serve our customers.”
QNX “is used in a wide range of products whose compromise could result in a malicious actor gaining control of highly-sensitive systems,” Eric Goldstein, the head of CISA’s cyber division, said. “While we are not aware of any active exploitation, we encourage users of QNX to review the advisory BlackBerry put out today and implement mitigation measures, including patching systems as quickly as possible.”
Goldstein declined to address CISA’s conversations with BlackBerry but said the agency “works regularly with companies and researchers to disclose vulnerabilities in a timely and responsible manner so that users can take steps to protect their systems.”
Asked whether the company originally believed QNX was unaffected, BlackBerry said its initial investigation into affected software “identified several versions that were affected, but that list of impacted software was incomplete.”
BlackBerry is hardly the first company to disclose a bug in widely used industrial software, and cybersecurity experts say such flaws are to be expected occasionally in highly complex systems. But resolving the QNX problem will be a major task for BlackBerry and the government.
In a June announcement about QNX’s integration into 195 million vehicles, BlackBerry called the operating system “key to the future of the automotive industry” because it provides “a safe, reliable, and secure foundation” for autonomous vehicles. BlackBerry bragged that QNX was the embedded software of choice of 23 of the top 25 electric vehicle makers.
The QNX vulnerability also has the Biden administration scrambling to prevent major fallout. Vulnerabilities in this code could have significant ripple effects across industries — from automotive to health care — that rely heavily on the software. In some cases, upgrading this software will require taking affected devices offline, which could jeopardize business operations.
“By compromising one critical system, [hackers] can potentially hit thousands of actors down that line globally,” said William Loomis, an assistant director at the Atlantic Council’s Cyber Statecraft Initiative. “This is a really clear example of a good return on investment for those actors, which is what makes these attacks so valuable for them.”
After analyzing the industries where QNX was most prevalent, CISA worked with those industries’ regulators to understand the “major players” and warn them to patch the vulnerability, the agency employee said.
Goldstein confirmed that CISA “coordinated with federal agencies overseeing the highest risk sectors to understand the significance of this vulnerability and the importance of remediating it.”
CISA also planned to brief foreign governments about the risks, according to the PowerPoint presentation.
BlackBerry is far from unique in knowing little about what happens to its products after it sells them to its customers, but for industrial software like QNX, that supply-chain blindness can create national security risks.
“Software supply chain security is one of America’s greatest vulnerabilities,” said Andy Keiser, a former top House Intelligence Committee staffer. “As one of the most connected societies on the planet, we remain one of the most vulnerable.”
But rather than expecting vendors to identify all of their customers, security experts say, companies should publish lists of the types of code included in their software, so customers can check to see if they’re using code that has been found to be vulnerable.
“BlackBerry cannot possibly fully understand the impact of a vulnerability in all cases,” said David Wheeler, a George Mason University computer science professor and director of open source supply chain security at the Linux Foundation, the group that supports the development of the Linux operating system. “We need to focus on helping people understand the software components within their systems, and help them update in a more timely way.”
For years, the Commerce Department’s National Telecommunications and Information Administration has been convening industry representatives to develop the foundation for this kind of digital ingredient list, known as a “software bill of materials.” In July, NTIA published guidance on the minimum elements needed for an SBOM, following a directive from President Joe Biden’s cybersecurity executive order.
Armed with an SBOM, a car maker or medical device manufacturer that learned of a software issue such as the QNX breach could quickly check to see if any of its products were affected.
SBOMs wouldn’t prevent hackers from discovering and exploiting vulnerabilities, and the lists alone cannot tell companies whether a particular flaw actually poses a risk to their particular systems. But these ingredient labels can dramatically speed up the process of patching flaws, especially for companies that have no idea what software undergirds their products.
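A minimal sketch of that check, with invented product and component names (real SBOMs use standard formats such as SPDX or CycloneDX rather than this toy structure):

from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str

# Toy per-product "ingredient lists" a device maker might hold.
sbom_by_product = {
    "infusion-pump-x1": [Component("qnx-rtos", "6.5.0"), Component("zlib", "1.2.11")],
    "router-r200":      [Component("busybox", "1.31.1")],
}

def affected_products(vulnerable_name, vulnerable_versions):
    # Return products whose SBOM lists a component version flagged by an advisory.
    return [
        product
        for product, components in sbom_by_product.items()
        if any(c.name == vulnerable_name and c.version in vulnerable_versions
               for c in components)
    ]

print(affected_products("qnx-rtos", {"6.5.0", "6.6.0"}))   # -> ['infusion-pump-x1']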
“Buying software is only the start of the transaction. It is not the end,” said Trey Herr, director of the Atlantic Council’s Cyber Statecraft Initiative.
“It’s not a new problem,” Herr added. “It’s not a problem that’s going away, and what we are doing right now is insufficient for the scale of that problem.”
You’ve Never Heard of the Biggest Digital Media Company in America
https://www.nytimes.com/2021/08/15/business/media/red-ventures-digital-media.html
THE THOUGHT POLICE ARE HERE
Florida Sheriff’s Office Now Notifying People It Will Be Inflicting Its Pre-Crime Program On Them
https://www.techdirt.com/articles/20210724/15223647236/florida-sheriffs-office-now-notifying-people-it-will-be-inflicting-pre-crime-program-them.shtml