► ► Educational CyberPlayGround®, Inc. 1999 https://edu-cyberpg.com
► ► Blog https://CyberPlayGround.org ©
► ► NetHappenings Newsletter ©1989 email subscribe / unsubscribe
► ► K12 School Directory © http://k12playground.com
► ► Twitter @Cyberplayground @NetHappenings @K12Playground
THIS IS VERY VERY LONG……
MAKE SURE YOU SCROLL ALL THE WAY DOWN
A Casino Gets Hacked Through a Fish-Tank Thermometer
https://www.entrepreneur.com/article/368943
Are your fish tanks secure? Secure your laptop. Secure your smartphone. Secure your tablet. And, before I forget, secure your fish tank. Yes, you heard me. Your fish tank. That was the lesson learned a few years ago by the operators of a North American casino. According to a 2018 Business Insider report, cybersecurity executive Nicole Eagan of security firm Darktrace told the story while addressing a conference.
“The attackers used that (a fish-tank thermometer) to get a foothold in the network,” she recounted. “They then found the high-roller database and then pulled that back across the network, out the thermostat, and up to the cloud.” Can this really be possible? It certainly can. And you can blame the Internet of Things.
AI Security: How Human Bias Limits Artificial Intelligence
April 15, 2021 | By Mark Stone
https://securityintelligence.com/articles/ai-security-human-bias-artificial-intelligence/
For cybersecurity experts, artificial intelligence (AI) can both respond to and predict threats. But because AI security is everywhere, attackers are using it to launch more refined attacks. Each side is seemingly playing catch-up, with no clear winner in sight.
How can defenders stay ahead? To gain context about AI that goes beyond prediction, detection and response, our industry will need to ‘humanize’ the process. We’ve explored some of the technical aspects of AI, such as how it can both prevent and launch distributed denial-of-service attacks. But to get the most out of it in the long run, we’ll need to take a social sciences approach instead.
What AI Security Can’t Do
First, let’s establish what AI and machine learning are. AI, true to its name, is the broader concept of machines carrying out ‘smart’ tasks. Machine learning (ML) is a subset of AI: it feeds data to computers so they can process that data and learn for themselves. In either case, the algorithms are built from data that determines which patterns are expected and which count as abnormal.
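The pattern-learning idea above can be sketched with a toy anomaly detector. All data, names and thresholds here are hypothetical illustrations, not anything from the article: the model "learns" what normal looks like from historical observations, then flags values that deviate too far from that baseline.

```python
# Toy sketch of learning "expected" vs. "abnormal" patterns from data.
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn the expected pattern: the mean and spread of normal observations."""
    return mean(samples), stdev(samples)

def is_abnormal(value, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Train on "normal" hourly login counts, then score new observations.
normal_logins = [98, 102, 97, 105, 100, 99, 103, 101]
baseline = fit_baseline(normal_logins)

print(is_abnormal(104, baseline))   # within the learned pattern
print(is_abnormal(500, baseline))   # far outside it: flagged
```

Real ML-based security tools are vastly more sophisticated, but the core loop is the same: fit a model to historical data, then score new data against it, which is exactly why biased or incomplete training data produces biased detections.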
The best AI requires data scientists, sound statistics and as much human input as possible. As you train it, AI learns to produce results that may not be visible to the human running it. It can even make judgments based on data you didn’t train it on. This ‘black box’ nature is why there’s also a push to build AI that can explain how it reaches its decisions.
No matter how well AI trains itself, human oversight and input are key to its success. That’s the takeaway from Julie Carpenter, research fellow in the ethics and emerging sciences group at California Polytechnic State University.
“Every decision you make in AI should have a human in the loop at this point,” she says. “We don’t have any sort of genius AI that understands human context, or human ways of life or sentience. Some sort of oversight is necessary.”
AI Can’t Outthink Us
Carpenter explains that AI’s original goal is to replicate human-like thinking, an attempt that remains true today for most AI products. AI cybersecurity — and AI in general — is there to serve humans in one way or another, she said. But it still doesn’t understand human context, culture or meaning.
The belief that AI will, sometime in the future, outsmart and outthink us is incorrect, Carpenter said. She also shared her strong doubts about the current state of AI reading emotion. ‘Affective’ AI like this is being used in advertising to try to read consumers’ attitudes toward products and marketing campaigns.
“I don’t think it’s necessarily a good direction for AI to go,” she warned. “How can we teach AI to do something we (ourselves) cannot do — which is perfectly read each other’s emotions?”
How AI Bias Hurts Cybersecurity
Is artificial intelligence a threat? Maybe not in the science fiction sense of machines taking over the world. But it does open up new avenues of attack. And because AI is trained by humans, it can include human bias — or fail to account for human bias. Instead of approaching AI security from an external standpoint (i.e. preventing breaches) we must also consider the impact it might have internally.
Suppose you decide to start using AI to prevent breaches in your company. You may not need to worry so much about blocking clever threat actors; instead, worry more about keeping your own users, customers and employees safe. By using AI security in some form, are you putting them at risk? In today’s threat landscape, with personal devices on corporate networks and people working from home, enterprise networks are handling far more personal traffic than ever before.
How to Overcome Bias
Carpenter advises that companies look for the broader impacts that go beyond just the intended use of the AI product.
In our industry, protecting personal information is critical. But what happens when AI security glosses over something that may, at first glance, seem harmless but is, in fact, sensitive to certain groups?
Carpenter offers an example. Let’s say a company suffers a data breach in which the only information that leaked was employees’ genders. For many people, that might not be a concern.
“But having someone’s gender hacked and put out there could be a really big deal for a lot of people,” she said. “It could be life changing … devastating … traumatizing … because gender is such a complicated social and cultural issue.”
Depending on what kind of service you handle and what kind of data is linked, you may have different kinds of outcomes.
The Limits on ‘Reading People’
Another potential pitfall for AI in cybersecurity involves advanced biometrics, especially specifics like facial expressions. Even looking ahead into the 2040s, Carpenter is skeptical that AI will understand visual cues. The subtleties, nuances and cultural differences are simply too complex.
“It’s going to disregard context, situations and suggestiveness,” she says. “You could have a frown on your face and the AI technology thinks that you’re frustrated or angry. But you pull back the picture, and the person is standing while they’re reading a book, and they’re actually just concentrating. It doesn’t really matter what other biometrics you triangulate it with. It’s a guessing game.”
Remember Ethical Frameworks
One piece of low-hanging fruit from a user perspective, Carpenter advises, is to look at the General Data Protection Regulation (GDPR) and any protocols that spell out users’ rights, and to build an ethical framework on those rights.
“If you look at things like the rights for the citizen section of the GDPR, it explicitly defines what my rights are as a user and as a data person,” she says. “If my data is incorrect, how do I fix it, how can I get organizations to stop disseminating false data about me? These are the ethical questions that are out there, and things that are user-centered that can be a starting point for discussions in organizations.”
With any type of strategic planning, having the right people in place is a crucial element for success. With AI security, it’s no different.
Checklist for Working With AI
Carpenter insists organizations should have an important initial discussion about AI security and answer several key questions:
What are the goals of using AI, even beyond the business goals?
How does the organization think of AI as a concept?
What should the AI do, and what shouldn’t it do?
What is it we’re artificially replicating with AI?
Whose intelligence are we artificially replicating?
How will this intelligence be used?
What do we want the intelligence to do that goes above and beyond its primary functionality?
“There need to be explicit discussions, smaller discussions and micro discussions between and within the teams and working groups,” she says. “We also need to make decisions about what to include and not to include, what to code and not to code, how to promote the product or not, who we give it to and who we are designing it for.”
What’s Next for AI Security?
Carpenter recalls a recent talk with a very large tech company in which she asked how its AI security handled a huge data breach. Beyond its uses, she was curious what the company had learned about the group that carried out the attack.
“We’re not detectives,” the executive told her, “and all we can do is put a cork back in the leak and move on to predicting how they might attack us again.”
This type of reactive, short-term thinking is often the best we can do to keep up with the cycle of prediction, detection and response. Carpenter hopes that in the long term, cybersecurity can leverage people in social sciences more. They could help AI find forensic patterns, cultural patterns, how attacks were happening, who is behind the attacks and what their motivations are. When programmed and put in place correctly, AI security could someday predict and forecast how future events might emerge.
Use Some AI … But Not Too Much
“AI should provide more refined insights, not so much in terms of quantity but in terms of quality,” Carpenter says. “Because you’re looking at this diverse set of rules, and you’re not stuck in an echo chamber with the same ideas and the same concepts. Frankly, if I was working in cybersecurity, and I was working in an organization with everybody throwing around the term AI (too much), I’d be a little concerned.”
Cybersecurity experts, she suggests, must learn to think like social scientists, taking a step back so everyone in the enterprise is on the same page and communication improves across the board.
“People from social sciences are specifically trained to help you give AI more understanding,” she says.
Better AI Security By Thinking Like a Human
In fact, it’s difficult not to come away with the perception that winning in cybersecurity is about taking human psychology and social sciences into account in other areas, too. Almost anyone who has instilled a culture of awareness in their enterprise will tell you that they’re much more confident about their security posture.
Learning about, adopting and getting the most out of AI security is no different. The more we understand about the human element and the more we add that understanding into AI input, the better off we’ll be as an industry.
So where did that cache of 500 million Facebook phone numbers come from? @lilyhnewman got to the bottom of it. Turns out it was scraped from the site directly by exploiting an undisclosed vulnerability in the site’s contact-import feature, which allowed attackers to create a massive address book with millions of phone numbers and “match” those numbers against existing Facebook accounts. Facebook never fully disclosed the issue; instead, this past week it pointed back to similar, but only tangentially related, stories.
Motherboard: Cool, how about one more? There’s yet another cache of Facebook phone numbers in the form of a Telegram bot. @josephfcox ran the numbers.
Clearview AI, the controversial facial recognition app
BuzzFeed News: Breathtakingly good reporting here. BuzzFeed News found more than 7,000 users from close to 2,000 public agencies using Clearview AI, the controversial facial recognition app that checks faces against a database of 3 billion images scraped from social media sites. BuzzFeed News published the results in a searchable table — including ICE, the Air Force, and even public schools. This is incredible work that took the reporters over a year to complete.
THE BOTNET
If Bitcoin becomes a global, universal currency, asymmetric threats like illegal data embedded in its blockchain become a major challenge.
Akamai has reported on a new method: a botnet that uses the Bitcoin blockchain ledger. Since the blockchain is globally accessible and hard to take down, the botnet’s operators appear to be safe.
The blockchain already carries arbitrary embedded data, including illegal pornography and leaked classified documents, all put there by anonymous Bitcoin users. None of this, so far, appears to seriously threaten those in power in governments or corporations. Once someone adds something to the Bitcoin ledger, it becomes sacrosanct: removing it requires a fork of the blockchain, in which Bitcoin fragments into multiple parallel cryptocurrencies (and associated blockchains). Forks happen, rarely, but never yet because of legal coercion. And repeated forking would destroy Bitcoin’s stature as a stable(ish) currency.
The botnet’s designers are using this idea to create an unblockable means of coordination, but the implications are much greater. Imagine someone using this idea to evade government censorship. Most Bitcoin mining happens in China. What if someone added a bunch of Chinese-censored Falun Gong texts to the blockchain?
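The hiding trick Akamai described can be illustrated with a toy encoder. The exact byte layout below is my assumption for illustration, not Akamai’s reported scheme: the idea is simply that a four-octet IPv4 address (here, a backup command-and-control server) can be packed into the satoshi amounts of two ordinary-looking transactions, where anyone can read it but no one can take it down.

```python
# Toy sketch: hide an IPv4 address in two blockchain transaction amounts,
# two octets per amount. The encoding scheme is hypothetical.
def encode_ip(ip: str) -> tuple[int, int]:
    """Pack 'a.b.c.d' into two satoshi amounts: (a*256+b, c*256+d)."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return a * 256 + b, c * 256 + d

def decode_ip(tx1: int, tx2: int) -> str:
    """Recover the address from the two transaction amounts."""
    return ".".join(str(o) for o in (tx1 // 256, tx1 % 256, tx2 // 256, tx2 % 256))

amounts = encode_ip("203.0.113.7")   # amounts look like small, unremarkable payments
print(amounts)                       # (51968, 28935)
print(decode_ip(*amounts))           # 203.0.113.7
```

Because the amounts are indistinguishable from ordinary micro-payments, blocking the channel would mean blocking the blockchain itself, which is the whole point of the technique.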
On the Insecurity of ES&S Voting Machines’ Hash Code
It turns out that ES&S has bugs in its hash-code checker: if the “reference hashcode” is completely missing, it says “yes, boss, everything is fine” instead of reporting an error. It’s simultaneously shocking and unsurprising that ES&S’s hashcode checker could contain such a blunder and that it would go unnoticed by the U.S. Election Assistance Commission’s federal certification process. It’s unsurprising because testing naturally focuses on whether the system works right when used as intended; using the system in unintended ways, which is what attackers would do, is not something certification testing tends to probe.
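The class of bug described here is easy to sketch. The code below is a hypothetical illustration, not ES&S’s actual checker: a verifier that “fails open” treats a missing reference hash as success, while the correct version fails closed.

```python
# Hypothetical illustration of a hash checker that fails open vs. one that
# fails closed when the reference hash is missing.
import hashlib
from pathlib import Path

def verify_buggy(target: Path, ref_hash_file: Path) -> bool:
    if not ref_hash_file.exists():
        return True  # BUG: absence of the reference is reported as "everything is fine"
    expected = ref_hash_file.read_text().strip()
    actual = hashlib.sha256(target.read_bytes()).hexdigest()
    return actual == expected

def verify_fixed(target: Path, ref_hash_file: Path) -> bool:
    if not ref_hash_file.exists():
        # Fail closed: a missing reference hash is an error, never a pass.
        raise FileNotFoundError(f"reference hash missing: {ref_hash_file}")
    expected = ref_hash_file.read_text().strip()
    actual = hashlib.sha256(target.read_bytes()).hexdigest()
    return actual == expected
```

This is also the kind of case that intended-use testing misses: every test that supplies a valid reference hash passes both versions, and only a test that deliberately deletes the reference file exposes the difference.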
Also:
Another gem in Mr. Mechler’s report is in Section 7.1, in which he reveals that acceptance testing of voting systems is done by the vendor, not by the customer. Acceptance testing is the process by which a customer checks a delivered product to make sure it satisfies requirements. To have the vendor do acceptance testing pretty much defeats the purpose.
Capitol Police ignored intelligence warnings ahead of Jan. 6 riots, watchdog report finds
The Capitol Police ignored critical intelligence ahead of the Jan. 6th riot, including overlooking a warning that, “Congress itself is the target,” according to an internal watchdog report obtained by NBC News.
The police force tasked with protecting the U.S. Capitol also lacked policies and procedures, leaving it severely unprepared to deal with the deadly insurrection, the 104-page report prepared by the Capitol Police’s inspector general found. The report has not been made public.
Pennsylvania GOP launches ‘super MAGA Trump’ primary
Never mind Pittsburgh and Philadelphia. Palm Beach, Fla., is where the party’s Senate nomination is likely to be decided.
“There’s no denying that the Republican Party in Pennsylvania is still a party of Trump.” Steve Bannon, a former White House chief strategist to Trump, told POLITICO that “any candidate who wants to win in Pennsylvania in 2022 must be full Trump MAGA.”
US formally names Russian Foreign Intelligence Service (SVR) as the culprit in SolarWinds hack
The former president of the United States of America acted like a Russian operative for four years, blew the pandemic response, got Covid, crashed the economy, insulted everyone in the world, hasn’t conceded, and incited an ongoing insurrection and violent attack on the Capitol.
For the first time EVER, the US government said Russian agent Konstantin Kilimnik provided Russian intelligence agencies with the internal Trump campaign polling/strategy data he received from Manafort and Gates in 2016. Even Mueller didn’t go that far.
We knew Trump 2016 polling data went from Manafort > Kilimnik. Today, Treasury says that data went from Kilimnik > Russian intelligence agencies.
https://home.treasury.gov/news/press-releases/jy0126
— Marshall Cohen (@MarshallCohen), April 15, 2021
Here’s KK with long-term buds Manafort and … look, it’s Bernie’s 2016 chief strategist Tad Devine!
One of the most under-discussed pieces of the Mueller report: Manafort met Kilimnik to discuss polling data and Trump campaign strategy in the Midwest, but they also discussed the Russian belief that Trump needed to win in order for Russia to effectively control eastern Ukraine.
See https://home.treasury.gov/policy-issues/financial-sanctions/recent-actions/20210415