The Techno-Crime Newsletter 08/08/2023

Compiled by Walt Manning
CEO, Techno-Crime Institute

This newsletter is distributed to everyone on our mailing list and provides links and insights regarding techno-crimes, investigations, security, and privacy.

Contents in this issue:


  1. See your identity pieced together from stolen data
  2. Did you know that assassins advertise on the darknets?
  3. How Public Cameras Recognize and Track You
  4. Police Are Requesting Self-Driving Car Footage For Video Evidence
  5. ‘FraudGPT’ Malicious Chatbot Now for Sale on Dark Web
  6. Researchers discover 60,000 ‘modded’ Android apps carrying adware
  7. Cybercriminals can break voice authentication with 99% success rate
  8. Satellites Are Rife With Basic Security Flaws
  9. Peloton Bugs Expose Enterprise Networks to IoT Attacks
  10. 41.4 million affected by healthcare data breaches this year, nearing ’22 totals
  11. Scheduled Speaking Engagements


See your identity pieced together from stolen data

(May 17, 2023)

Do you know whether your personal data has been stolen from a data breach?

One of the great services that can help is “Have I Been Pwned,” created by cybersecurity expert Troy Hunt in 2013.

You can visit the service anytime and input any of your email addresses to discover which data breaches might have exposed your data.
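For the technically inclined: besides the web form, HIBP also publishes APIs. The breached-account lookup requires an API key, but the companion Pwned Passwords service exposes a keyless “range” endpoint built on k-anonymity — only the first five characters of your password’s SHA-1 hash ever leave your machine, and the matching happens locally. A minimal Python sketch (illustrative only; endpoint details per HIBP’s public API documentation):

```python
import hashlib
import urllib.request


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character
    prefix sent to the API and the suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches.
    Only the 5-character hash prefix is sent over the network."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists hash suffixes and breach counts, one per line.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

Calling `pwned_count("password")` should return a very large number, since that string appears in countless breaches — a small demonstration of why the service exists.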

But the Australian Broadcasting Corporation just published an excellent interactive way for you to see how this data from various breaches can be put together to create a “visual summary of the potential scale of the leaked information out there about you.”

The demonstration will help you understand how something known as “the mosaic effect” can increase the risks we all face online.

Everyone should see how this “mosaic effect” can be used to add more pieces to the puzzle of who you are and what you do.

I encourage everyone to look at your particular puzzle.

It may give you a different perspective on privacy and techno-crime risks.


Did you know that assassins advertise on the darknets?

(Jul. 2, 2023)

Many people know little about darknets and assume that anyone using them is either a criminal or a terrorist.

But that’s not true. There are legitimate uses for darknets.

Many people use this technology to protect their identity and physical location because they live under oppressive regimes or in places where Internet use is heavily monitored.

Unfortunately, this is true in many countries today, even those that claim to be free and open democracies.

However, it is also true that criminal activity does occur on these darknets.

For example, you can find several listings that offer assassins, or “hitmen,” for hire.

Would anyone really hire a hitman from the darknet?

From the first linked article:

“John Michael Musbach – a 34-year-old resident of Haddonfield, New Jersey – was reportedly sentenced to six and a half years in jail for hiring a hitman to commit a murder.


He transferred $20,000 worth of BTC to the “executor” to kill a kid that was about to testify against him in a child-pornography case. Moreover, the youngster was the victim.”


“Scott Quinn Berkett – a 26-year-old resident of Beverly Hills, Los Angeles – pleaded guilty last summer to sending $13K in bitcoin to a hitman on the Dark Web whose task was to kill Berkett’s ex-girlfriend.


Undercover agents discovered the crime and contacted the offender introducing themselves as the killer. Berkett confirmed his intentions and even sent additional money to the detectives.”

And from a separate press release from the United States Attorney’s Office, Eastern District of California (also in July):

“Kristy Lynn Felkins, 38, of Fallon, Nevada, was sentenced Thursday to five years in prison for a murder-for-hire plot, U.S. Attorney Phillip A. Talbert announced.


According to court documents, Felkins sent 12 bitcoin (valued at approximately $5,000 at the time) to a dark web hitman website known as Besa Mafia to have her ex-husband murdered. From February to May 2016, Felkins regularly communicated with the administrator of the site to pay and arrange for the murder of her ex-husband. Felkins gave the administrator the specific location of her husband in an attempt to have him murdered.”

Over the years, I’ve had people tell me that crime on the darknets really doesn’t exist.

But articles about arrests prove that this view is naïve.

Could you investigate a case involving the darknet?

If you want to learn more about darknets, how they work, and what types of criminal activity occur on these networks, we will announce a new online Darknet Mastermind course soon.

If you are interested in being part of the first mastermind group, you will receive an advance notice with all the details, plus a special one-time-only discount, even before we announce the mastermind’s availability to everyone on the mailing list and our website.

The number of spaces in this first group will be limited, so don’t wait!

Send your response directly to:


How Public Cameras Recognize and Track You

(Jul. 6, 2023)

In addition to giving you recent news about techno-crimes, we also review many resources related to how technology impacts your personal privacy.

Although it was first released in 2022, WIRED’s informative 12-minute video is even more important for you to see today.

The video explains how the explosion of video surveillance cameras, combined with other technologies, can be used to track you.

“WIRED spoke with several experts about the explosion of surveillance technology, how police use it, and what the dangers might be. As tech advances, street cameras can now employ facial recognition and even connect to the internet. What does this mean for the future of privacy?”

As investigators, we need to carefully consider not just the legal aspects of the incredible surveillance capabilities of this technology but also the professional ethics of how we use it.

Spending the 12 minutes to watch this is well worth your time.


Police Are Requesting Self-Driving Car Footage For Video Evidence

(Jun. 29, 2023)

As autonomous vehicle technology advances, it raises many new questions and issues regarding security and privacy.

Waymo vehicles can have up to 29 different cameras feeding the artificial intelligence that operates them to “enhance safety and to relieve the driver from fully operating the vehicle.”

The additional technology in these vehicles, including multiple lidar and radar sensors, has significantly improved their capabilities and safety.

But now we see the video footage from these cars being requested by law enforcement.

Since these cars travel widely throughout the cities where they are approved to operate, their cameras might collect video images that could provide valuable evidence in a criminal investigation.

At the same time, there have been cases where video footage from autonomous vehicles has helped prove the innocence of a suspect.

But what about the privacy of other drivers and pedestrians who might be captured in these videos? Do any of us have an expectation of privacy when we’re in range of a self-driving car’s cameras?

From the linked article:

“In December 2021, San Francisco police were working to solve the murder of an Uber driver. As detectives reviewed local surveillance footage, they zeroed in on a gray Dodge Charger they believed the shooter was driving. They also noticed a fleet of Waymo’s self-driving cars, covered with cameras and sensors, happened to drive by around the same time.


Recognizing the convenient trove of potential evidence, Sergeant Phillip Gordon drafted a search warrant to Alphabet Inc.’s Waymo, demanding hours of footage that the SUVs had captured the morning the shooting took place. “I believe that there is probable cause that the Waymo vehicles driving around the area have video surveillance of the suspect vehicle, suspects, crime scene, and possibly the victims in this case,” Gordon wrote in the application for the warrant to Google’s sister company. A judge quickly authorized it, and Waymo provided footage.”


“While security cameras are commonplace in American cities, self-driving cars represent a new level of access for law enforcement — and a new method for encroachment on privacy, advocates say. Crisscrossing the city on their routes, self-driving cars capture a wider swath of footage. And it’s easier for law enforcement to turn to one company with a large repository of videos and a dedicated response team than to reach out to all the businesses in a neighborhood with security systems. “We’ve known for a long time that they are essentially surveillance cameras on wheels,” said Chris Gilliard, a fellow at the Social Science Research Council. “We’re supposed to be able to go about our business in our day-to-day lives without being surveilled unless we are suspected of a crime, and each little bit of this technology strips away that ability.””


“Comprehensive privacy legislation, which has languished for years in the U.S., is ultimately the only thing that can thwart overly broad requests from police, experts say.


“With the lack of consumer privacy protections that we have in the U.S. right now, companies are able to collect as much information as humanly possible,” said Matthew Guariglia, a policy analyst at the Electronic Frontier Foundation, adding that police are then able to capitalize on the trove of data.


Police who have obtained footage from self-driving cars say they view it as a tool to be used judiciously — and that the evidence can be used to build cases and exonerate suspects.”

I am not saying that the police did anything wrong by taking advantage of the available technology; they complied with legal requirements by obtaining search warrants for this data.

However, one could argue that the data requested in the search warrant might be over-broad since the videos also include information about people unrelated to the specific investigation.

When so much data is available, it can be a slippery slope between using every available technology to investigate a crime and a police surveillance state that can track activities outside the scope of a legal search warrant.

In my book, Techno-Crimes and the Evolution of Investigations, I raise the question of whether the exponential growth of technology will require us to develop new legal systems to ensure that the needs of investigations can be met while protecting individual privacy.

This discussion about how the data collected by autonomous vehicles can be used is another example of our laws failing to keep up with today’s technologies.


‘FraudGPT’ Malicious Chatbot Now for Sale on Dark Web

(Jul. 25, 2023)

Many of the current artificial intelligence platforms, such as OpenAI’s ChatGPT, are trying to moderate which types of prompts their A.I. will respond to and how.

We’ve already seen A.I. used to create improved phishing email messages to make them more challenging to detect.

A.I. is also being used to write and improve malware code.

So, we should not be surprised that “FraudGPT” is now being sold by darknet vendors:

“FraudGPT starts at $200 per month and goes up to $1,700 per year, and it’s aimed at helping hackers conduct their nefarious business with the help of A.I. The actor claims to have more than 3,000 confirmed sales and reviews so far for FraudGPT.”


“FraudGPT — which in ads is touted as a “bot without limitations, rules, [and] boundaries” — is sold by a threat actor who claims to be a verified vendor on various underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus.”


“Both WormGPT and FraudGPT can help attackers use A.I. to their advantage when crafting phishing campaigns, generating messages aimed at pressuring victims into falling for business email compromise (BEC), and other email-based scams, for starters.


FraudGPT also can help threat actors do a slew of other bad things, such as: writing malicious code; creating undetectable malware; finding non-VBV bins; creating phishing pages; building hacking tools; finding hacking groups, sites, and markets; writing scam pages and letters; finding leaks and vulnerabilities; and learning to code or hack.”

Many of your employees or clients are experimenting with the new A.I. platforms that pop up daily.

But do they know the risks to their proprietary information and privacy that using these platforms might cause?

Here are a few links to articles that might give you a different perspective about the challenges of using A.I.:

Generative AI Has an Intellectual Property Problem

10 Threats That the Use Of AI Poses For Companies And Organizations

How to Avoid Privacy and Data Security Risks Associated with AI

AI and privacy: Everything you need to know about trust and technology

Organizations should consider developing policies to regulate the use of artificial intelligence and train employees to understand the dangers of the unrestricted use of A.I.

If you are using A.I. or if you intend to explore it in the future, learn about the risks first.


Researchers discover 60,000 ‘modded’ Android apps carrying adware

(Jun. 6, 2023)

For many years, a significantly higher percentage of new mobile malware has targeted Android devices.

Plus, various manufacturers customize many different versions of the Android operating system, so your device may receive updates more slowly than others.

But new research from Bitdefender found 60,000 Android apps that install “adware” along with the app, without the user’s knowledge.

““Upon analysis, the campaign is designed to aggressively push adware to Android devices with the purpose of driving revenue,” researchers wrote. “However, the threat actors involved can easily switch tactics to redirect users to other types of malware, such as banking Trojans to steal credentials and financial information or ransomware.”


Among the apps spreading the adware were free VPN programs, modified online games and cracked utility programs like PDF viewers.”


Most of these apps don’t come from the official Google Play app store, so be very careful if you get your apps from any other source.

Since most employees use personally owned devices, make them aware of the risks of mobile apps that are not downloaded from approved app stores.


Cybercriminals can break voice authentication with 99% success rate

(Jul. 6, 2023)

Many organizations use voice authentication to verify the identity of employees, and many financial institutions also use this technology to authenticate their clients for remote banking and access to investment accounts.

But researchers from the University of Waterloo have shown that generative A.I. can be used to fool these systems.

From the linked article:

“The Waterloo researchers have developed a method that evades spoofing countermeasures and can fool most voice authentication systems within six attempts. They identified the markers in deepfake audio that betray it is computer-generated and wrote a program that removes these markers, making it indistinguishable from authentic audio.


In a recent test against Amazon Connect’s voice authentication system, they achieved a 10% success rate in one four-second attack, with this rate rising to over 40% in less than thirty seconds. With some of the less sophisticated voice authentication systems they targeted, they achieved a 99% success rate after six attempts.”

Although I haven’t seen any reports where this technology has been successfully used to commit fraud, it’s probably only a matter of time.

Physical and cybersecurity systems will need new and consistent audits to ensure they can deal with new techno-crime threats like this.

Organizations that use voice authentication may need to upgrade their technology to stay ahead of generative A.I.


Satellites Are Rife With Basic Security Flaws

(Jul. 28, 2023)

Thousands of satellites are orbiting the Earth, providing us with communications, imaging, Internet service, weather monitoring, the Global Positioning System (GPS), broadcasting, and scientific research.

But did you ever wonder whether these satellites are secure?

Just like every other device using software or transmitted commands, satellites have vulnerabilities.

Researchers from the Ruhr University Bochum and the Cispa Helmholtz Center for Information Security in Germany published an interesting academic paper, “Space Odyssey: An Experimental Software Security Analysis of Satellites.”

“The satellites inspected by the researchers, according to an academic paper, contain “simple” vulnerabilities in their firmware and show “that little security research from the last decade has reached the space domain.” Among the problems are a lack of protection for who can communicate with the satellite systems and a failure to include encryption. Theoretically, the researchers say, the kinds of issues they discovered could allow an attacker to take control of a satellite and crash it into other objects.”

Imagine the techno-crime possibilities.

Shut down the GPS and impact anyone who depends on GPS for navigation or accurate location identification.

Modify the orbital path of communications or imaging satellites to cause them to fall back to Earth and self-destruct.

If the data transmitted to and from a satellite is not encrypted (and many aren’t), a hacker could intercept and eavesdrop on all communications to and from the satellite.

The security risks from satellites are not a new problem and aren’t limited to the three satellites from the paper cited above:

The vulnerability of satellite communications

The Growing Risk of a Major Satellite Cyber Attack

Insecure satellite Internet is threatening ship and plane safety

Hacking satellites with $300 worth of T.V. gear

We rely on technology to do more for us every day, but it’s almost an impossible challenge to secure every device.

It almost seems as if the world is in a race to develop and deploy every new technology as fast as possible without thinking through the security and privacy risks.

Because technology is evolving exponentially, we’ll need investigators who have the technical expertise to investigate the crimes that will occur because devices (such as satellites) were designed without cybersecurity in mind.

Will you be an investigator with techno-crime investigation skills, or one that is left behind?


Peloton Bugs Expose Enterprise Networks to IoT Attacks

(Jul. 26, 2023)

Could someone break into your home or office network via your Peloton exercise equipment?

Is it possible to remotely spy on users through a fitness machine?

Possibly, according to research from Check Point Software:

“Hacking a Peloton Tread through any of these points could lead to the exposure not only of a user’s personal data, but attackers could also leverage the machine’s connectivity to move laterally to a corporate network to mount a ransomware or other type of high-level attacks, the researchers revealed in a blog post published this week.”


“Researchers had also identified a previous flaw in the Peloton system which could have allowed attackers to remotely spy on victims through an open unauthenticated API. Indeed, its mere existence as an IoT device exposes the home fitness gear to the same vulnerabilities that any Internet-exposed device faces, and the potential risks to users that go along with them.”

Just as we discussed in last month’s newsletter about Amazon being penalized for privacy violations related to its Alexa and Ring products, you need to understand the security and privacy risks before purchasing or using any Internet of Things connected device.

If a client calls you with suspicions that their home or office network has been hacked, you’ll need to know every device connected to the network to include in your investigation as a possible entry point.


41.4 million affected by healthcare data breaches this year, nearing ’22 totals

(Jul. 10, 2023)

Medical identity theft is a bigger problem than many may realize.

Medical identity theft occurs when someone steals or purchases your medical and insurance data and uses it to obtain medical care and prescription drugs, or even to pay for surgical procedures.

The danger arises when the thief’s records become mixed with yours. Their records may show drug or alcohol abuse, different medication allergies, or even a different blood type.

We’ve seen cases where victims were denied insurance coverage, given the wrong blood type in a transfusion, and given medicines they were allergic to.

And the numbers are growing.

According to research published by Politico, the number of victims in the first half of 2023 is already approaching the total for all of 2022:

“Health care entities covered by the federal health privacy law HIPAA have reported more than 330 breaches affecting 41.4 million people to HHS’ Office for Civil Rights through Monday, already closing in on 2022’s total of more than 52 million, according to a POLITICO analysis of the most recent Health and Human Services Department data.”

Ransomware attacks on healthcare providers and multiple data breaches where healthcare data is stolen and sold are only part of the problem.

We’ve also seen reports where healthcare organizations have sold or shared healthcare data with Google and Facebook, as also reported by Politico this last April.

The most important goal for any healthcare professional or organization should be the health of their patients and clients – not profiting from patient data.

Protecting personal healthcare data must become a higher priority, with significant consequences and penalties when cybersecurity isn’t given enough resources, or when patient data is stolen or provided to outside parties in violation of the law.

Anything less is unacceptable.


Scheduled Speaking Engagements

I will be speaking to a private client risk forum on September 12th in Paris, France. The presentation will include A.I. voice cloning and other techno-crime topics.

I’m scheduled to give an all-day training seminar about various aspects of techno-crime investigations on Wednesday, September 20th, for the ACFE Las Vegas Chapter in Las Vegas, Nevada. Contact me if you want more details regarding the specific topics.

I’ll be giving a 4-hour workshop on October 13th for the Central Indiana Chapter of the ACFE in Indianapolis, Indiana. Topics will include A.I., deep fakes, data poisoning, darknets, and how suspects can use technology to hide from you. For more information, go to


The Techno-Crime Newsletter is a free monthly newsletter providing information and opinions about techno-crimes, cybersecurity tools and techniques, privacy, and operational security for investigators. To subscribe or to read past issues, see The Techno-Crime Newsletter Archive web page.

Please feel free to forward this newsletter to anyone who will find the information interesting or useful. You also have our permission to reprint The Techno-Crime Newsletter, as long as the entire newsletter is reprinted.


Walt Manning is an investigations futurist who researches how technology is transforming crime and how governments, legal systems, law enforcement, and investigations will need to evolve to meet these new challenges. Walt started his career in law enforcement with the Dallas Police Department and then went on to manage e-discovery and digital forensics services for major criminal and civil litigation matters worldwide. He is the author of the thought-provoking book Techno-Crimes and the Evolution of Investigations, where he explains why technology will force investigations to evolve. Walt is an internationally recognized speaker and author known for his ability to identify current and impending threats from technology and advise his clients and audiences about ways to minimize their risk. In addition to many published articles, he has been interviewed and widely quoted in the media as an expert on topics related to technology crime and investigations.

Copyright © 2023 by The Techno-Crime Institute Ltd.

If you are not currently subscribed to our mailing list, and would like to receive The Techno-Crime Newsletter in the future, fill out the form below...