Techno-Crime Newsletter 3/18/2024

Compiled by Walt Manning
CEO, Techno-Crime Institute
newsletter@technocrime.com
https://technocrime.com

This newsletter is distributed to everyone on our mailing list and provides links and insights regarding techno-crimes, investigations, security, and privacy.

Contents in this issue:

 

  1. Record breach of French government exposes up to 43 million people’s data
  2. Widely available video doorbells could grant strangers easy access, report claims
  3. Broke Cyber Pros Flock to Cybercrime Side Hustles
  4. A High School Deepfake Nightmare
  5. Meta Abandons Hacking Victims, Draining Law Enforcement Resources, Officials Say
  6. Researchers use fake charging station Wi-Fi to hack into and steal your Tesla
  7. Police must return phones after 175 million passcode guesses, judge says
  8. Cyberattack forces Canada’s financial intelligence agency to take systems offline
  9. Police warn of thieves using Wi-Fi jamming tech to disarm cameras, alarms
  10. A Vending Machine Error Revealed Secret Face Recognition Tech
  11. Your fingerprints can be recreated from the sounds made when you swipe on a touchscreen
  12. Senator asks FTC to investigate automakers’ data privacy practices
  13. AI Used to Resurrect Dead Dictator to Sway Election

______________________________________________

Record breach of French government exposes up to 43 million people’s data

(Mar. 14, 2024)

https://www.theregister.com/2024/03/14/mega_data_breach_at_french/

Government agencies worldwide talk about improving cybersecurity, but when the time comes to approve budgets to allocate resources and staffing, cybersecurity doesn’t receive the same support.

A French government department – responsible for registering and assisting unemployed people – is the latest victim of a mega data breach that compromised the information of up to 43 million citizens.

 

France Travail announced on Wednesday that it informed the country’s data protection watchdog (CNIL) of an incident that exposed a swathe of personal information about individuals dating back 20 years.

 

The department’s statement reveals that names, dates of birth, social security numbers, France Travail identifiers, email addresses, postal addresses, and phone numbers were exposed.

 

It’s been a tough month for France in terms of cybersecurity and data protection, too. Just a month ago, the country was contending with what was called the largest-ever data breach.

 

Data breaches at Viamedis and Almerys, two third-party payment providers for healthcare and insurance companies, led to more than 33 million people’s data being compromised.

Until there are significant consequences for providing inadequate cybersecurity for sensitive personal information, we’ll continue to see data breaches like this.

______________________________________________

Widely available video doorbells could grant strangers easy access, report claims

(Mar. 1, 2024)

https://www.emergingtechbrew.com/stories/2024/03/01/widely-available-video-doorbells-could-grant-strangers-easy-access-report-claims

Connected “smart” doorbells with cameras are increasingly popular, as people like the feeling of security they provide against potential “porch package thieves” and the ability to see who is at their front door via a smartphone app.

But convenience too often trumps security, and there is growing evidence that many of these devices aren’t secure.

If you recently installed a smart doorbell from one of the internet’s most common online retailers, package thieves could be the least of your problems.

 

According to a report released Thursday by Consumer Reports, the popular Eken and Tuck video doorbells—sold by Amazon, Walmart, and other retailers for about $30—could provide backdoor access into users’ at-home networks.

 

“They’re pretty cheap, which is why some people might have gone for them. But as we’ve seen with a lot of IoT products, when they’re super cheap, sometimes, they cut some corners,” CR’s director of tech policy, Justin Brookman, told us.

 

The major red flag? Any person standing near the doorbell could “pair” their phone with it and take control of the device, Brookman said. Even after the device owner regains control, the stranger could continue to access images from the camera, Brookman said. This leaves the devices ripe for exploitation by bad actors.

I could cite many more articles about the vulnerabilities of smart doorbells, but I’ll give you some other quotes from another article directly from Consumer Reports:

On a recent Thursday afternoon, a Consumer Reports journalist received an email containing a grainy image of herself waving at a doorbell camera she’d set up at her back door.

 

If the message had come from a complete stranger, it would have been alarming. Instead, it was sent by Steve Blair, a CR privacy and security test engineer who had hacked into the doorbell from 2,923 miles away.

 

The security issues are serious. People who face threats from a stalker or estranged abusive partner are sometimes spied on through their phones, online platforms, and connected smart devices. The vulnerabilities CR found could allow a dangerous person to take control of the video doorbell on their target’s home, watching when they and their family members come and go.

 

First, these doorbells expose your home IP address and WiFi network name to the internet without encryption, potentially opening your home network to online criminals. Security experts worry there could be more problems, including poor security on the company servers where videos are being stored.

Think twice before purchasing a smart doorbell or any other device connected to your home or small office network.

Do your homework and see if there are any negative reviews about the device that might change your mind.

With current smart home Internet of Things technology, many devices may create more cybersecurity risks than the convenience is worth.

______________________________________________

Broke Cyber Pros Flock to Cybercrime Side Hustles

(Mar. 8, 2024)

https://www.darkreading.com/cybersecurity-operations/broke-cyber-pros-cybercrime-side-hustles

For several years, I’ve seen many articles about how we must catch up with the demand for cybersecurity professionals.

I’ve even referenced the problem in some of my keynotes and training over the years, talking about the potential threats from cybersecurity and IT insiders.

Now we are seeing that my warnings have been confirmed:

Cybersecurity professionals are finding it more attractive to take their talents to the Dark Web and earn money working on the offensive side of cybercrime. This puts enterprises in a tough spot: cut into profit growth to keep cybersecurity skills from flowing to the highest bidder, or figure out how to defend their networks against those who know their weaknesses most intimately.

 

Layoffs and consolidation across the cyber sector are ratcheting up the pressure on the remaining workers, while at the same time salary growth is stalling — making a cybercrime side hustle an increasingly attractive way for cyber pros to make ends meet, according to a new study out of the Chartered Institute of Information Security (CIISec), which analyzed Dark Web advertisements for cybercriminal services provided by professionals with cybersecurity day jobs.

 

Gartner predicts that by 2025, 25% of cybersecurity leaders will leave their roles due to stress. And despite layoffs in the cybersecurity sector, which have largely focused on non-technical roles in marketing, sales, and administration, there are still hundreds of thousands of open jobs in the US cybersecurity sector alone.

From a different article published by TechRadar:

At the same time, an estimated 30 to 70 percent of data, security and development job postings continue to go unfilled, and by 2025, the global shortage of full-time software developers and cybersecurity professionals is expected to reach eight million. Evidently, there aren’t enough people with the right skills to fill the roles organizations currently need.

 

Where spending on IT training has taken a dive, resources should instead be poured into a more holistic talent strategy. This should incorporate investment in diverse training, with a focus on identifying and developing future skills, and defining career paths for employees.

 

Despite the high demand for technology professionals, many job postings remain unfilled due to a skills mismatch. For there to be a change, IT organizations will need to take it upon themselves to act as talent incubators at the forefront of finding and generating the skills needed for the future. Without a radical transformation of how businesses attract and retain digital talent, they will significantly inhibit their ability to thrive.

Here are several example darknet advertisements from Cybernews:

One by a Python developer offers to “make VoIP [voice over internet] chatbots, group chatbots, AI chatbots, hacking, and phishing frameworks and much more” for around $30 an hour. The developer signs off by posting: “Xmas is coming and my kids need new toys.”

 

Another developer with almost a decade’s experience offers to make “phishing pages, bank cloning, market cloning […] crypto drainers, SMS spoofing, and email spoofing” and says they are “excited to try new projects out.”

 

A third offers to work hand in hand with AI, using large-language models to “help with coding, phishing, analyzing documents, and more,” with prices starting at $300.

The “more holistic talent strategy” is discussed in my book Techno-Crimes and the Evolution of Investigations. Although my recommendation is related to digital investigations, the same needs also justify this approach for cybersecurity.

______________________________________________

A High School Deepfake Nightmare

(Feb. 15, 2024)

https://www.404media.co/email/547fa08a-a486-4590-8bf5-1a038bc1c5a1/

Deepfake apps continue to be easier to find and use:

Police in Washington state were alarmed that administrators at a high school did not report that students used AI to take photos from other students’ Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children.

 

The police report makes clear that the images were created with a web-based “nudify” or “undress” app, which automatically and instantly edits photos of women to make them appear naked. The students who used the app to create naked images of other students told police they discovered the app on TikTok and posted some of them on Snapchat or showed them to other students at the lunch table at school. A student “reportedly admitted to making the photos,” the police report says. “[Redacted] went on to tell his friends that he found an app on TikTok for ‘naked AI.’ He then went onto [sic] Safari app and gave them a step by step of how it was done.”

This is not just a problem in the United States:

AI-generated naked child images shock Spanish town of Almendralejo

 

A sleepy town in southern Spain is in shock after it emerged that AI-generated naked images of young local girls had been circulating on social media without their knowledge.

This is another example of how quickly technology is outpacing our laws and the ability of international law enforcement to address new techno-crimes we’ve never seen before.

______________________________________________

Meta Abandons Hacking Victims, Draining Law Enforcement Resources, Officials Say

(Mar. 6, 2024)

https://www.wired.com/story/meta-hacked-users-draining-resources/

Many of you know I’m not a fan of Meta, Google, or any company whose business model focuses on collecting and monetizing personal information.

Meta has experienced several data breaches and doesn’t put much emphasis on privacy or security.

But now we are seeing a pushback from law enforcement, rightly stating that it should be Meta’s responsibility to help customers whose accounts have been hacked:

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

 

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profits Meta directly.

 

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

 

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

 

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

 

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James says in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

 

Figures provided by James’ office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’s office says. Other states saw similar spikes in complaints during that period, according to the letter, with Pennsylvania recording a 270 percent increase, a 330 percent jump in North Carolina, and a 740 percent surge in Vermont.

Meta and other similar companies won’t willingly change their behavior soon because they are still making a lot of money.

That is why you won’t find me on any of their platforms.

For me, the additional marketing visibility and potential revenue are not worth the risk, and supporting these platforms is against my values.

If you are in law enforcement or investigations, consider whether the benefits you receive from these platforms are worth the risks.

______________________________________________

Researchers use fake charging station Wi-Fi to hack into and steal your Tesla

(Mar. 10, 2024)

https://www.autoblog.com/2024/03/10/researchers-use-fake-charging-station-wifi-to-hack-into-and-steal-your-tesla/

Do you know if your car is hackable?

Are there any Tesla owners out there?

If you own a Tesla, you might want to be extra careful logging into the WiFi networks at Tesla charging stations.

 

Security researchers Tommy Mysk and Talal Haj Bakry of Mysk Inc. published a YouTube video explaining how easy it can be for hackers to run off with your car using a clever social engineering trick.

 

Here’s how it works.

 

Many Tesla charging stations — of which there are over 50,000 in the world — offer a WiFi network typically called “Tesla Guest” that Tesla owners can log into and use while they wait for their car to charge, according to Mysk’s video.

 

Using a device called a Flipper Zero — a simple $169 hacking tool — the researchers created their own “Tesla Guest” WiFi network. When a victim tries to access the network, they are taken to a fake Tesla login page created by the hackers, who then steal their username, password, and two-factor authentication code directly from the duplicate site.

 

Once the hackers have stolen the credentials to the owner’s Tesla account, they can use it to log into the real Tesla app, but they have to do it quickly before the 2FA code expires, Mysk explains in the video.

 

One of Tesla vehicles’ unique features is that owners can use their phones as a digital key to unlock their car without the need for a physical key card.

 

Once logged in to the app with the owner’s credentials, the researchers set up a new phone key while staying a few feet away from the parked car.

 

The hackers wouldn’t even need to steal the car right then and there; they could track the Tesla’s location from the app and go steal it later.

I’ve seen examples where the Flipper Zero device referenced in the article can also be used to clone keyless entry cards, open garage doors, or ring a smart doorbell from a distance. The device can supposedly read some RFID cards and codes from several vehicle key fobs.

For more information about the Flipper Zero device:

https://flipperzero.one/

______________________________________________

Police must return phones after 175 million passcode guesses, judge says

(Jan. 5, 2024)

https://ottawacitizen.com/news/local-news/police-must-return-phones-after-175-million-passcode-guesses-judge-says

Technology makes it more challenging for law enforcement and all investigators to extract digital evidence from devices.

Do we need to draw a line defining where a “reasonable” effort must end?

Here is an interesting example where password-protected phones were seized in an alleged child pornography case:

The police seized the phones in October 2022 with a warrant obtained based on information about a Google account user uploading images of child pornography. The contents of the three phones were all protected by complex, alpha-numeric passcodes.

 

Ontario Superior Court Justice Ian Carter heard that police investigators tried about 175 million passcodes in an effort to break into the phones during the past year.

 

The problem, the judge was told, is that more than 44 nonillion potential passcodes exist for each phone.

 

To be more precise, the judge said, there are 44,012,666,865,176,569,775,543,212,890,625 potential alpha-numeric passcodes for each phone.

 

It means, Carter said, that even though 175 million passcodes were attempted, those efforts represented “an infinitesimal number” of potential answers.

 

In his ruling, Carter said the court had to balance the property rights of an individual against the state’s legitimate interest in preserving evidence in an investigation. The phones, he said, have no evidentiary value unless the police succeed in finding the right passcodes.
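The judge’s “infinitesimal” characterization is easy to verify with back-of-the-envelope arithmetic. This minimal sketch uses only the two figures quoted above (the arithmetic is mine, not from the ruling):

```python
# Figures quoted in the ruling
TOTAL_PASSCODES = 44_012_666_865_176_569_775_543_212_890_625  # possible passcodes per phone
ATTEMPTED = 175_000_000  # passcodes police reportedly tried in about a year

# Fraction of the keyspace actually searched
fraction = ATTEMPTED / TOTAL_PASSCODES
print(f"Fraction of keyspace searched: {fraction:.2e}")

# At the same pace, years needed to try every passcode
years_to_exhaust = TOTAL_PASSCODES / ATTEMPTED
print(f"Years to exhaust the keyspace at that rate: {years_to_exhaust:.2e}")
```

The searched fraction works out to roughly 4 parts in 10^24, and exhausting the keyspace at the same pace would take on the order of 10^23 years — far longer than the age of the universe.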

On the other hand, should investigators be given a time limit to complete a digital forensics analysis, after which they must return the mobile device, computer, car, smart home assistant, or whatever else was seized for the digital evidence it might contain?

This is another example where our laws and case precedents have yet to keep up with technology.

What do you think?

______________________________________________

Cyberattack forces Canada’s financial intelligence agency to take systems offline

(Mar. 5, 2024)

https://therecord.media/canada-fintrac-cyberattack-systems-offline

Combining the French data breach with this one (and the others mentioned in the article), 2024 has not been a good year for government cybersecurity.

Canada’s financial intelligence agency FINTRAC has announced pulling its corporate systems offline due to a cyber incident that struck over the weekend.

 

FINTRAC, the Financial Transactions and Reports Analysis Centre of Canada, is the Ottawa-based government body founded to detect and investigate money laundering and similar crimes.

 

It follows what was described as an “alarming” cyberattack targeting the Royal Canadian Mounted Police (RCMP) late last month, although again the nature of the attack was not disclosed.

 

Earlier this year, Canada’s foreign ministry discovered “malicious cyber activity” on its network that allowed hackers access to personal information. It is not known whether this was a criminal or state-sponsored breach.

 

That came after a separate incident in which data on current and former members of the country’s armed forces and the RCMP was compromised after a contractor providing relocation services for government personnel was hacked.

Combine the significant Microsoft and Hewlett-Packard breaches with the earlier article about cybersecurity professionals moving to cybercrime, and I can’t wait to see what the rest of the year will bring.

______________________________________________

Police warn of thieves using Wi-Fi jamming tech to disarm cameras, alarms

(Mar. 4, 2024)

https://ktla.com/news/local-news/police-warn-of-thieves-using-wifi-jamming-tech-to-disarm-cameras-alarms/

The various components for many home security systems are now connected via Wi-Fi instead of having to run wiring throughout a home as in years past.

However, the techno-criminals are now evolving:

Authorities with the Los Angeles Police Department are warning residents in Los Angeles’ Wilshire-area neighborhoods of a series of burglaries involving wifi-jamming technology that can disarm surveillance cameras and alarms using a wireless signal.

The article includes several tips to improve the security of your home, but one of them is:

Consider moving to an alarm system that is hard wired rather than running on a wireless signal.

I’ll be speaking at the 35th Annual Global Fraud Conference in June. The title of my presentation is “What Will You Do When Your Digital Footprint Helps the Shadows to Come After You?”

Operational security, or “OPSEC,” is something many investigators don’t think about. Come listen to my presentation and see if I can convince you to pay more attention to your own OPSEC.

______________________________________________

A Vending Machine Error Revealed Secret Face Recognition Tech

(Feb. 24, 2024)

https://www.wired.com/story/facial-recognition-vending-machine-error-investigation/

A student investigation at the University of Waterloo uncovered a system that scanned countless undergrads without consent.

How many of you would suspect a vending machine would have built-in facial recognition?

Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting face recognition data without their consent.

 

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

 

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

The use of technology to collect data, especially biometric data, should be based on transparency and trust.

If there is a disconnect or a failure to inform targets of this technology, companies and the government should not be surprised when they receive an adverse reaction.

This is another example of why we need laws to regulate this type of biometric data collection technology.

Surveillance capitalism is obviously out of control and needs to be limited.

______________________________________________

Your fingerprints can be recreated from the sounds made when you swipe on a touchscreen — Researchers’ new side-channel attack can reproduce partial fingerprints to enable attacks

(Feb. 19, 2024)

https://www.tomshardware.com/tech-industry/cyber-security/your-fingerprints-can-be-recreated-from-the-sounds-made-when-you-swipe-on-a-touchscreen-researchers-new-side-channel-attack-can-reproduce-partial-fingerprints-to-enable-attacks

Do you use Touch ID or any other security application that uses your fingerprint to authenticate your identity?

An interesting new attack on biometric security has been outlined by a group of researchers from China and the US. PrintListener: Uncovering the Vulnerability of Fingerprint Authentication via the Finger Friction Sound [PDF] proposes a side-channel attack on the sophisticated Automatic Fingerprint Identification System (AFIS). The attack leverages the sound characteristics of a user’s finger swiping on a touchscreen to extract fingerprint pattern features. Following tests, the researchers assert that they can successfully attack “up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%.” This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.

 

Biometric fingerprint security is widespread and widely trusted. If things continue as they are, it is thought that the fingerprint authentication market will be worth nearly $100 billion by 2032. However, organizations and people have become increasingly aware that attackers might want to steal their fingerprints, so some have started to be careful about keeping their fingerprints out of sight, and become sensitive to photos showing their hand details.
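For context on the numbers quoted above, a rough comparison against pure random guessing (my arithmetic, not the paper’s) shows how much the side channel helps an attacker:

```python
# At the reported FAR setting, a random fingerprint is falsely accepted
# with probability 0.01% per attempt.
far = 0.0001       # 0.01% False Acceptance Rate
attempts = 5       # attempts allowed in the reported tests

# Probability that at least one of five random attempts is falsely accepted
baseline = 1 - (1 - far) ** attempts

# Reported PrintListener success rate against partial fingerprints
reported_partial = 0.279  # 27.9%

advantage = reported_partial / baseline
print(f"Baseline success within {attempts} attempts: {baseline:.4%}")
print(f"PrintListener advantage over random guessing: ~{advantage:.0f}x")
```

Random guessing would succeed about 0.05% of the time within five attempts, so the reported 27.9% figure is several hundred times better than chance — which is what makes the side channel noteworthy even if it is not yet a practical threat.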

I’m not sure that this technology is advanced enough to pose much of a threat in real-world applications, but the fact that researchers continue to point out the potential for abuse is not surprising.

Is using your fingerprint convenient?

Yes.

Just realize that the techno-criminals also read this research, and it won’t be long before they find a way to use it to bypass your security.

______________________________________________

Senator asks FTC to investigate automakers’ data privacy practices

(Feb. 28, 2024)

https://therecord.media/senator-asks-ftc-to-investigate-automaker-privacy-practices

Have you ever considered whether your car might be collecting data about you that the manufacturer could sell?

Calling automakers’ responses to his demand for answers “evasive and vague,” Sen. Edward Markey (D-MA) on Wednesday called on Federal Trade Commission (FTC) Chair Lina Khan to investigate the car industry’s data privacy practices.

 

In a letter to Khan, Markey said that in December he asked 14 major car manufacturers to offer transparency on how they implement and enforce privacy protections in their vehicles. Markey told Khan that the answers he received were far from clear and even prevaricating.

 

Most automakers surveyed acknowledged they only provide the right to delete data to consumers in states where the automaker is legally required to do so, the senator said. Markey also told Khan that the manufacturers “largely failed to answer whether they collect more data than is needed for the service provided, whether a consumer loses certain vehicle functionality by refusing to consent to the data collection, or whether the manufacturers have suffered a cyberattack in the last ten years.”

 

Markey urged Khan to act now, saying the industry has been operating with little or no oversight for years even as stalkers have been able to exploit location data to harass victims and many Americans have increasingly demanded answers on how car manufacturers treat and monetize their private data.

Once again, we have an example of data collection that is not transparent and does not give consumers a clear-cut ability to “opt out” of these types of data collection practices.

On the other hand, since the federal government in the U.S. has yet to enact a comprehensive personal privacy law, why would we expect companies to respect their customers’ right to privacy?

From an investigative perspective, could the data collected by a suspect’s vehicle be valuable as evidence?

I think we are reaching a point where law enforcement and investigative professionals may be faced with the choice of whether accessing this type of personal information is ethical, even if it might be legal under current laws.

______________________________________________

AI Used to Resurrect Dead Dictator to Sway Election

(Feb. 13, 2024)

https://futurism.com/the-byte/ai-resurrect-dead-dictator

There will be many important elections this year, and we’ve already seen many examples of deepfake technology being used to influence outcomes.

However, here is an imaginative example that I hadn’t considered:

An Indonesian political party used generative AI to “resurrect” one of the most violent political figures in the nation’s history in a bizarre, deepfaked endorsement message, CNN reports — the latest, and possibly strangest, use of generative AI in the world of politics, elections, and information.

 

The figure pictured in the deepfake — first shared to X-formerly-Twitter on January 6 — is the former Indonesian dictator Suharto, whose US-backed New Order Regime is estimated to have killed anywhere between 500,000 and about a million Indonesians. Suharto’s brutal regime lasted over three decades, until mass unrest caused him to formally resign in 1998. He died in 2008.

Be thoughtful about any photos or videos you see this year that might influence your vote.

They might be real, but they might be faked.

Choose wisely.

______________________________________________

The Techno-Crime Newsletter is a free monthly newsletter providing information and opinions about techno-crimes, cybersecurity tools and techniques, privacy, and operational security for investigators. To subscribe or to read past issues, see The Techno-Crime Newsletter Archive web page.

Please feel free to forward this newsletter to anyone who will find the information interesting or useful. You also have our permission to reprint The Techno-Crime Newsletter, as long as the entire newsletter is reprinted.

Walt Manning is an investigations futurist who researches how technology is transforming crime and how governments, legal systems, law enforcement, and investigations will need to evolve to meet these new challenges. Walt started his career in law enforcement with the Dallas Police Department and then went on to manage e-discovery and digital forensics services for major criminal and civil litigation matters worldwide. He is the author of the thought-provoking book Techno-Crimes and the Evolution of Investigations, where he explains why technology will force investigations to evolve. Walt is an internationally recognized speaker and author known for his ability to identify current and impending threats from technology and advise his clients and audiences about ways to minimize their risk. In addition to many published articles, he has been interviewed and widely quoted in the media as an expert on topics related to technology crime and investigations.

Copyright © 2024 by The Techno-Crime Institute Ltd.

If you are not currently subscribed to our mailing list and would like to receive The Techno-Crime Newsletter in the future, you can sign up at https://technocrime.com.
