Techno-Crime Newsletter 7/8/2024

Compiled by Walt Manning, CEO, Techno-Crime Institute (newsletter@technocrime.com)

This newsletter is distributed to everyone on our mailing list and provides links and insights regarding techno-crimes, investigations, security, and privacy.

Contents in this issue:

  1. At Microsoft, years of security debt come crashing down
  2. The Future of The Cybersecurity Profession with The Rise Of AI
  3. Broke Cyber Pros Flock to Cybercrime Side Hustles
  4. A High School Deepfake Nightmare
  5. Meta Abandons Hacking Victims, Draining Law Enforcement Resources, Officials Say
  6. Researchers use fake charging station WiFi to hack into and steal your Tesla
  7. Police must return phones after 175 million passcode guesses, judge says
  8. Cyberattack forces Canada’s financial intelligence agency to take systems offline
  9. Police warn of thieves using Wi-Fi jamming tech to disarm cameras, alarms
  10. A Vending Machine Error Revealed Secret Face Recognition Tech
  11. Your fingerprints can be recreated from the sounds made when you swipe on a touchscreen
  12. Senator asks FTC to investigate automakers’ data privacy practices
  13. AI Used to Resurrect Dead Dictator to Sway Election


At Microsoft, years of security debt come crashing down

(Apr. 30, 2024)

Microsoft has a long history of providing software that is later found to have vulnerabilities, which has been true ever since the first version of Microsoft Windows was published in 1985.

Much of Microsoft’s software is now cloud-based, and like most software companies, Microsoft encourages customers to adopt these products.

But in recent years there have been several substantial breaches and vulnerabilities related to Microsoft products and services that have resulted in significant damage to their customers, as well as Microsoft’s reputation for reliability:

Years of accumulated security debt at Microsoft are seemingly crashing down upon the company in a manner that many critics warned about but few ever believed would actually come to light.

“It’s certainly not the first time a nation-state adversary has breached Microsoft’s cloud environments and after so many instances, empty promises of improved security are no longer enough,” Adam Meyers, SVP of counter adversary operations at CrowdStrike, said via email.

In January, Microsoft said a Russia-backed threat group called Midnight Blizzard gained access to emails, credentials, and other sensitive information from top Microsoft executives, certain corporate customers, and a number of federal agencies.

Then in early April, the federal Cyber Safety Review Board released a long-anticipated report which showed the company failed to prevent a massive 2023 hack of its Microsoft Exchange Online environment. The hack by a People’s Republic of China-linked espionage actor led to the theft of 60,000 State Department emails and gave the attackers access to the accounts of other high-profile officials.

Many Microsoft products have become a de facto standard for users and organizations.

But alternatives do exist.

Where do you draw the line when a software company’s track record only worsens over time instead of improving customer security?

Looking at platforms designed for security and privacy might be a better idea.

The Proton service platform, which already provides secure and encrypted email, calendar, contacts, and data storage, has just added a secure Docs service with the same level of privacy and security protection it includes with all its products.

It may be time for an alternative like this.


The Future of The Cybersecurity Profession with The Rise Of AI

(Jul. 3, 2024)

Many articles and research papers discuss Artificial Intelligence’s potential risks and benefits.

But how will AI impact cybersecurity, investigations, and auditing in the coming years?

I don’t think AI will replace every job in these professions, but everyone working in these roles will need to understand the technology well enough to work with AI in the future.

AI is one of the fastest-developing technologies today, which can make it difficult to keep up.

However, the complexity introduced by the rapid adoption of AI and other emerging technologies (plus the creation of larger digital ecosystems and the increased sophistication of cyber threats) points toward the need for a more holistic set of skills per function for cybersecurity to be effective. Cybersecurity expertise is critical per function, but looking at cybersecurity in a silo is causing many of the failures we see today.

There is a need for cybersecurity professionals to understand the business context of the ecosystem they’re trying to protect in order for them to be targeted and relevant when applying or embedding cybersecurity. There is also a need to understand adjacent domains like audit, privacy, risk, and digital technology governance and management to ensure cybersecurity is integrated with the needs of those domains and not siloed.

Finally, they need to understand emerging technologies like AI, as one cannot protect what they do not understand—whether this means identifying risks and building controls or conducting forensics and investigations in ecosystems that embed AI.

AI won’t replace cybersecurity professionals, but it will transform the profession. AI will reshape many cybersecurity roles so that practitioners can focus their time and attention on what humans do best—devising strategy, setting policy, thinking creatively, addressing the human element and motives of attackers, applying negotiation tactics, and monitoring the operation of AI itself while applying ethical standards.

Will you be able to evolve and grow with AI?

If not, you may be left behind.


Broke Cyber Pros Flock to Cybercrime Side Hustles

(Mar. 8, 2024)

For several years, I’ve seen many articles about how we must catch up with the demand for cybersecurity professionals.

I’ve even referenced the problem in some of my keynotes and training over the years, talking about the potential threats from cybersecurity and IT insiders.

Now we are seeing that my warnings have been confirmed:

Cybersecurity professionals are finding it more attractive to take their talents to the Dark Web and earn money working on the offensive side of cybercrime. This puts enterprises in a tough spot: cut into profit growth to keep cybersecurity skills from flowing to the highest bidder, or figure out how to defend their networks against those who know their weaknesses most intimately.

Layoffs and consolidation across the cyber sector are ratcheting up the pressure on the remaining workers, while at the same time salary growth is stalling — making a cybercrime side hustle an increasingly attractive way for cyber pros to make ends meet, according to a new study from the Chartered Institute of Information Security (CIISec), which analyzed Dark Web advertisements for cybercriminal services provided by professionals with cybersecurity day jobs.

Gartner predicts that by 2025, 25% of cybersecurity leaders will leave their roles due to stress. And despite layoffs in the cybersecurity sector, which have largely focused on non-technical roles in marketing, sales, and administration, there are still hundreds of thousands of open jobs in the US cybersecurity sector alone.

From a different article published by TechRadar:

At the same time, an estimated 30 to 70 percent of data, security and development job postings continue to go unfilled, and by 2025, the global shortage of full-time software developers and cybersecurity professionals is expected to reach eight million. Evidently, there aren’t enough people with the right skills to fill the roles organizations currently need.

Where spending on IT training has taken a dive, resources should instead be poured into a more holistic talent strategy. This should incorporate investment in diverse training, with a focus on identifying and developing future skills, and defining career paths for employees.

Despite the high demand for technology professionals, many job postings remain unfilled due to a skills mismatch. For there to be a change, IT organizations will need to take it upon themselves to act as talent incubators at the forefront of finding and generating the skills needed for the future. Without a radical transformation of how businesses attract and retain digital talent, they will significantly inhibit their ability to thrive.

Here are several example darknet advertisements from Cybernews:

One by a Python developer offers to “make VoIP [voice over IP] chatbots, group chatbots, AI chatbots, hacking, and phishing frameworks and much more” for around $30 an hour. The developer signs off by posting: “Xmas is coming and my kids need new toys.”

Another developer with almost a decade’s experience offers to make “phishing pages, bank cloning, market cloning […] crypto drainers, SMS spoofing, and email spoofing” and says they are “excited to try new projects out.”

A third offers to work hand in hand with AI, using large-language models to “help with coding, phishing, analyzing documents, and more,” with prices starting at $300.

The “more holistic talent strategy” is discussed in my book Techno-Crimes and the Evolution of Investigations. Although my recommendation is related to digital investigations, the same needs also justify this approach for cybersecurity.


A High School Deepfake Nightmare

(Feb. 15, 2024)

Deepfake apps continue to be easier to find and use:

Police in Washington state were alarmed that administrators at a high school did not report that students used AI to take photos from other students’ Instagram accounts and “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children.

The police report makes clear that the images were created with a web-based “nudify” or “undress” app, which automatically and instantly edits photos of women to make them appear naked. The students who used the app to create naked images of other students told police they discovered the app on TikTok and posted some of them on Snapchat or showed them to other students at the lunch table at school. A student “reportedly admitted to making the photos,” the police report says. “[Redacted] went on to tell his friends that he found an app on TikTok for ‘naked AI.’ He then went onto [sic] Safari app and gave them a step by step of how it was done.”

This is not just a problem in the United States:

AI-generated naked child images shock Spanish town of Almendralejo

A sleepy town in southern Spain is in shock after it emerged that AI-generated naked images of young local girls had been circulating on social media without their knowledge.

This is another example of how quickly technology is outpacing our laws and the ability of international law enforcement to address new techno-crimes we’ve never seen before.


Meta Abandons Hacking Victims, Draining Law Enforcement Resources, Officials Say

(Mar. 6, 2024)

Many of you know I’m not a fan of Meta, Google, or any company whose business model focuses on collecting and monetizing personal information.

Meta has experienced several data breaches and doesn’t put much emphasis on privacy or security.

But now we are seeing a pushback from law enforcement, rightly stating that it should be Meta’s responsibility to help customers whose accounts have been hacked:

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profit Meta directly.

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James says in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

Figures provided by James’s office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’s office says. Other states saw similar spikes in complaints during that period, according to the letter, with Pennsylvania recording a 270 percent increase, North Carolina a 330 percent jump, and Vermont a 740 percent surge.

Meta and other similar companies won’t willingly change their behavior soon because they are still making a lot of money.

That is why you won’t find me on any of their platforms.

The potential revenue from the additional marketing visibility is not worth it, and supporting these platforms is against my values.

If you are in law enforcement or investigations, consider whether the benefits you receive from these platforms are worth the risks.


Researchers use fake charging station WiFi to hack into and steal your Tesla

(Mar. 10, 2024)

Do you know if your car is hackable?

Are there any Tesla owners out there?

If you own a Tesla, you might want to be extra careful logging into the WiFi networks at Tesla charging stations.

Security researchers Tommy Mysk and Talal Haj Bakry of Mysk Inc. published a YouTube video explaining how easy it can be for hackers to run off with your car using a clever social engineering trick.

Here’s how it works.

Many Tesla charging stations — of which there are over 50,000 in the world — offer a WiFi network typically called “Tesla Guest” that Tesla owners can log into and use while they wait for their car to charge, according to Mysk’s video.

Using a device called a Flipper Zero — a simple $169 hacking tool — the researchers created their own “Tesla Guest” WiFi network. When a victim tries to access the network, they are taken to a fake Tesla login page created by the hackers, who then steal their username, password, and two-factor authentication code directly from the duplicate site.

Once the hackers have stolen the credentials to the owner’s Tesla account, they can use them to log into the real Tesla app, but they have to do it quickly before the 2FA code expires, Mysk explains in the video.

One of Tesla vehicles’ unique features is that owners can use their phones as a digital key to unlock their car without the need for a physical key card.

Once logged in to the app with the owner’s credentials, the researchers set up a new phone key while staying a few feet away from the parked car.

The hackers wouldn’t even need to steal the car right then and there; they could track the Tesla’s location from the app and go steal it later.

I’ve seen examples where the Flipper Zero device referenced in the article can also be used to clone keyless entry cards, open garage doors, or ring a smart doorbell from a distance. The device can supposedly read some RFID cards and codes from several vehicle key fobs.

For more information about the Flipper Zero device:


Police must return phones after 175 million passcode guesses, judge says

(Jan. 5, 2024)

Technology makes it more challenging for law enforcement and all investigators to extract digital evidence from devices.

Do we need to draw a line defining where a “reasonable” effort must end?

Here is an interesting example where password-protected phones were seized in an alleged pedophile case:

The police seized the phones in October 2022 with a warrant obtained based on information about a Google account user uploading images of child pornography. The contents of the three phones were all protected by complex, alpha-numeric passcodes.

Ontario Superior Court Justice Ian Carter heard that police investigators tried about 175 million passcodes in an effort to break into the phones during the past year.

The problem, the judge was told, is that more than 44 nonillion potential passcodes exist for each phone.

To be more precise, the judge said, there are 44,012,666,865,176,569,775,543,212,890,625 potential alpha-numeric passcodes for each phone.

It means, Carter said, that even though 175 million passcodes were attempted, those efforts represented “an infinitesimal number” of potential answers.
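The judge’s arithmetic checks out: the figure quoted above is exactly 95^16, which corresponds to a 16-character passcode drawn from the 95 printable ASCII characters (the alphabet and length are my inference, not stated in the ruling, but the numbers match). A quick Python sanity check:

```python
# Sanity check of the passcode math quoted above.
# Assumption (mine): 95 printable ASCII characters, fixed length of 16.
ALPHABET_SIZE = 95
PASSCODE_LENGTH = 16

total = ALPHABET_SIZE ** PASSCODE_LENGTH
print(f"{total:,}")
# 44,012,666,865,176,569,775,543,212,890,625

# The 175 million guesses actually attempted cover a vanishingly
# small fraction of that keyspace.
guesses_tried = 175_000_000
print(f"fraction searched: {guesses_tried / total:.2e}")
# fraction searched: 3.98e-24
```

At the rate the investigators managed (roughly 5.5 guesses per second over a year), exhausting the keyspace would take on the order of 10^23 years, which is the point the judge was making.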

In his ruling, Carter said the court had to balance the property rights of an individual against the state’s legitimate interest in preserving evidence in an investigation. The phones, he said, have no evidentiary value unless the police succeed in finding the right passcodes.

On the other hand, should investigators be given a time limit by which they need to complete a digital forensics analysis in any case and then be forced to return the mobile device, computer, car, smart home assistant, or whatever device has been seized due to the potential existence of digital evidence?

This is another example where our laws and case precedents have yet to keep up with technology.

What do you think?


Cyberattack forces Canada’s financial intelligence agency to take systems offline

(Mar. 5, 2024)

Combine the French data breach with this one (and the others mentioned in the article), and 2024 has not been a good year for government cybersecurity.

Canada’s financial intelligence agency FINTRAC has announced pulling its corporate systems offline due to a cyber incident that struck over the weekend.

FINTRAC, the Financial Transactions and Reports Analysis Centre of Canada, is the Ottawa-based government body founded to detect and investigate money laundering and similar crimes.

It follows what was described as an “alarming” cyberattack targeting the Royal Canadian Mounted Police (RCMP) late last month, although again the nature of the attack was not disclosed.

Earlier this year, Canada’s foreign ministry discovered “malicious cyber activity” on its network that allowed hackers access to personal information. It is not known whether this was a criminal or state-sponsored breach.

That came after a separate incident in which data on current and former members of the country’s armed forces and the RCMP was compromised after a contractor providing relocation services for government personnel was hacked.

If you combine the significant Microsoft and Hewlett-Packard breaches with the earlier article about cybersecurity professionals moving to cybercrime, I can’t wait to see what the rest of the year will bring.


Police warn of thieves using Wi-Fi jamming tech to disarm cameras, alarms

(Mar. 4, 2024)

The components of many home security systems are now connected via Wi-Fi, rather than wired throughout the home as in years past.

However, the techno-criminals are now evolving:

Authorities with the Los Angeles Police Department are warning residents in Los Angeles’ Wilshire-area neighborhoods of a series of burglaries involving wifi-jamming technology that can disarm surveillance cameras and alarms using a wireless signal.

The article includes several tips to improve the security of your home, but one of them is:

Consider moving to an alarm system that is hard-wired rather than running on a wireless signal.

I’ll be speaking at the 35th Annual Global Fraud Conference in June. The title of my presentation is “What Will You Do When Your Digital Footprint Helps the Shadows to Come After You?”

Operational security, or “OPSEC,” is something many investigators don’t think about. Come and listen to my presentation, and see if I can change your mind about paying more attention to your own OPSEC.


A Vending Machine Error Revealed Secret Face Recognition Tech

(Feb. 24, 2024)

A student investigation at the University of Waterloo uncovered a system that scanned countless undergrads without consent.

How many of you would suspect a vending machine would have built-in facial recognition?

Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting face recognition data without their consent.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

The use of technology to collect data, especially biometric data, should be based on transparency and trust.

If there is a disconnect or a failure to inform targets of this technology, companies and the government should not be surprised when they receive an adverse reaction.

This is another example of why we need laws to regulate this type of biometric data collection technology.

Surveillance capitalism is obviously out of control and needs to be limited.


Your fingerprints can be recreated from the sounds made when you swipe on a touchscreen — Researchers’ new side-channel attack can reproduce partial fingerprints to enable attacks

(Feb. 19, 2024)

Do you use Touch-ID or any other security application that uses your fingerprint to authenticate your identity?

An interesting new attack on biometric security has been outlined by a group of researchers from China and the US. PrintListener: Uncovering the Vulnerability of Fingerprint Authentication via the Finger Friction Sound [PDF] proposes a side-channel attack on the sophisticated Automatic Fingerprint Identification System (AFIS). The attack leverages the sound characteristics of a user’s finger swiping on a touchscreen to extract fingerprint pattern features. Following tests, the researchers assert that they can successfully attack “up to 27.9% of partial fingerprints and 9.3% of complete fingerprints within five attempts at the highest security FAR [False Acceptance Rate] setting of 0.01%.” This is claimed to be the first work that leverages swiping sounds to infer fingerprint information.

Biometric fingerprint security is widespread and widely trusted. If things continue as they are, it is thought that the fingerprint authentication market will be worth nearly $100 billion by 2032. However, organizations and people have become increasingly aware that attackers might want to steal their fingerprints, so some have started to be careful about keeping their fingerprints out of sight, and become sensitive to photos showing their hand details.

I’m not sure that this technology is advanced enough to pose much of a threat in real-world applications, but the fact that researchers continue to point out the potential for abuse is not surprising.

Is using your fingerprint convenient?

Just realize that the techno-criminals also read this research, and it won’t be long before they find a way to use it to bypass your security.


Senator asks FTC to investigate automakers’ data privacy practices

(Feb. 28, 2024)

Have you ever considered whether your car might be collecting data about you that the manufacturer could sell?

Calling automakers’ responses to his demand for answers “evasive and vague,” Sen. Edward Markey (D-MA) on Wednesday called on Federal Trade Commission (FTC) Chair Lina Khan to investigate the car industry’s data privacy practices.

In a letter to Khan, Markey said that in December he asked 14 major car manufacturers to offer transparency on how they implement and enforce privacy protections in their vehicles. Markey told Khan that the answers he received were far from clear and even prevaricating.

Most automakers surveyed acknowledged they only provide the right to delete data to consumers in states where the automaker is legally required to do so, the senator said. Markey also told Khan that the manufacturers “largely failed to answer whether they collect more data than is needed for the service provided, whether a consumer loses certain vehicle functionality by refusing to consent to the data collection, or whether the manufacturers have suffered a cyberattack in the last ten years.”

Markey urged Khan to act now, saying the industry has been operating with little or no oversight for years even as stalkers have been able to exploit location data to harass victims and many Americans have increasingly demanded answers on how car manufacturers treat and monetize their private data.

Once again, we have an example of data collection that is not transparent or gives consumers a clear-cut ability to “opt out” of these types of data collection practices.

On the other hand, when the U.S. federal government has yet to enact a comprehensive personal privacy law, why would we expect companies to respect their customers’ right to privacy?

From an investigative perspective, could the data collected by a suspect’s vehicle be valuable as evidence?

I think we are reaching a point where law enforcement and investigative professionals may be faced with the choice of whether accessing this type of personal information is ethical, even if it might be legal under current laws.


AI Used to Resurrect Dead Dictator to Sway Election

(Feb. 13, 2024)

There will be many important elections this year, and we’ve already seen many examples of deepfake technology being used to influence outcomes.

However, here is an imaginative example that I hadn’t considered:

An Indonesian political party used generative AI to “resurrect” one of the most violent political figures in the nation’s history in a bizarre, deepfaked endorsement message, CNN reports — the latest, and possibly strangest, use of generative AI in the world of politics, elections, and information.

The figure pictured in the deepfake — first shared to X-formerly-Twitter on January 6 — is the former Indonesian dictator Suharto, whose US-backed New Order Regime is estimated to have killed anywhere between 500,000 and about a million Indonesians. Suharto’s brutal regime lasted over three decades, until mass unrest caused him to formally resign in 1998. He died in 2008.

Be thoughtful about any photos or videos you see this year that might influence your vote.

They might be real, but they might be faked.

Choose wisely.


The Techno-Crime Newsletter is a free monthly newsletter providing information and opinions about techno-crimes, cybersecurity tools and techniques, privacy, and operational security for investigators. To subscribe or to read past issues, see The Techno-Crime Newsletter Archive web page.

Please feel free to forward this newsletter to anyone who will find the information interesting or useful. You also have our permission to reprint The Techno-Crime Newsletter, as long as the entire newsletter is reprinted.

Walt Manning is an investigations futurist who researches how technology is transforming crime and how governments, legal systems, law enforcement, and investigations will need to evolve to meet these new challenges. Walt started his career in law enforcement with the Dallas Police Department and then went on to manage e-discovery and digital forensics services for major criminal and civil litigation matters worldwide. He is the author of the thought-provoking book Techno-Crimes and the Evolution of Investigations, where he explains why technology will force investigations to evolve. Walt is an internationally recognized speaker and author known for his ability to identify current and impending threats from technology and advise his clients and audiences about ways to minimize their risk. In addition to many published articles, he has been interviewed and widely quoted in the media as an expert on topics related to technology crime and investigations.

Copyright © 2024 by The Techno-Crime Institute Ltd.

If you are not currently subscribed to our mailing list, and would like to receive The Techno-Crime Newsletter in the future, fill out the form below…
