The Techno-Crime Newsletter 10/23/2023

Compiled by Walt Manning
CEO, Techno-Crime Institute

This newsletter is distributed to everyone on our mailing list and provides links and insights regarding techno-crimes, investigations, security, and privacy.

Contents in this issue:


  1. Deepfake Audio Is a Political Nightmare
  2. Deepfake Porn Is Out of Control
  3. Cops Can Now Fly Drones From Anywhere In The World Using Just A Web Browser
  4. A.I. Chatbots Can Guess Your Personal Information From What You Type
  5. 2023 is already the worst year for hacks—and we’re not out yet
  6. Data never dies: The immortal battle of data privacy
  7. New York Proposes Background Checks For 3-D Printers: Latest Crackdown Effort On ‘Ghost Guns’
  8. Many firms aren’t reporting breaches to the proper authorities
  9. Predictive Policing Software Terrible at Predicting Crimes
  10. GenAI in productivity apps: What could possibly go wrong?


Deepfake Audio Is a Political Nightmare

(Oct. 9, 2023)

We have already seen examples of deepfake technology used to influence elections, but more cases appear in the news every day.

The linked article references an audio file of a U.K. candidate verbally abusing a staff member recently posted on X (formerly known as Twitter).

However, detection technology still cannot quickly and reliably determine whether an image, audio file, or video is real or fake.

I have said many times that a lot of damage can be done between the time a deepfake is posted and when it is proven to be false, and we’ll see more of this in the future.

From the linked article:

“Audio deepfakes are emerging as a major risk to the democratic process, as the U.K.—and more than 50 other countries—move toward elections in 2024. Manipulating audio content is becoming cheaper and easier, while fact-checkers say it’s difficult to quickly and definitively identify a recording as fake. These recordings could spend hours or days floating around social media before they’re debunked, and researchers worry that this type of deepfake content could create a political atmosphere in which voters don’t know what information they can trust.


“If you are listening to a sound bite or a video online with this seed of doubt about whether this is genuinely real, it risks undermining the foundation of how debate happens and people’s capacity to feel informed,” says Kate Dommett, professor of digital politics at Sheffield University.”

Another recent incident likely influenced an election in Slovakia:

“Just two days before Slovakia’s elections, an audio recording was posted to Facebook. On it were two voices: allegedly, Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová from the daily newspaper Denník N. They appeared to be discussing how to rig the election, partly by buying votes from the country’s marginalized Roma minority.


But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia’s election rules, the post was difficult to widely debunk.”

We now live in an era where deepfakes and data poisoning can seriously threaten governments and public confidence in politicians everywhere.

At a time when public approval ratings for politicians are already very low (and perhaps for good reason), this technology will only cause more damage.

Before you react to any information, regardless of the format and the source, take the extra time to be sure you’re seeing the truth.

In the past, people used to say that “seeing is believing.”

But deepfake technology has made that simple view of the world obsolete.


Deepfake Porn Is Out of Control

(Oct. 16, 2023)

One of the earliest uses of deepfake technology was in pornographic videos, where the faces of the performers were replaced with those of celebrities.

But now, the technology has exploded and targets more than just public figures.

New research gives us insight into how serious this problem has become:

“A new analysis of nonconsensual deepfake porn videos, conducted by an independent researcher and shared with WIRED, shows how pervasive the videos have become. At least 244,625 videos have been uploaded to the top 35 websites set up either exclusively or partially to host deepfake porn videos in the past seven years, according to the researcher, who requested anonymity to avoid being targeted online.


Over the first nine months of this year, 113,000 videos were uploaded to the websites—a 54 percent increase on the 73,000 videos uploaded in all of 2022. By the end of this year, the analysis forecasts, more videos will have been produced in 2023 than the total number of every other year combined.


“This is something that targets everyday people, everyday high school students, everyday adults—it’s become a daily occurrence,” says Sophie Maddocks, who conducts research on digital rights and cyber-sexual violence at the University of Pennsylvania.


“There has been significant growth in the availability of A.I. tools for creating deepfake nonconsensual pornographic imagery, and an increase in demand for this type of content on pornography platforms and illicit online networks,” says Asher Flynn, an associate professor at Monash University, Australia, who focuses on A.I. and technology-facilitated abuse. This is only likely to increase with new generative A.I. tools.


The proliferation of these deepfake apps combined with a greater reliance on digital communications in the Covid-19 era and a “failure of laws and policies to keep pace” has created a “perfect storm,” Flynn says.”

For many years, I’ve been talking about how our laws have not kept pace with the technology that continues to grow exponentially.

It’s time for governments and politicians to take responsibility and enact laws to address these abuses, instead of wasting time on the unproductive and dysfunctional polarization that prevents any beneficial legislation from even being considered.


Cops Can Now Fly Drones From Anywhere In The World Using Just A Web Browser

(Oct. 18, 2023)

What do you think about the idea of police using drones as first responders or being able to monitor any location remotely?

There are obvious benefits to law enforcement’s use of this technology.

Staffing police departments adequately to respond to calls for service and conduct preventive patrol is a constant struggle.

Having the capability to use a drone to respond to certain types of calls can improve response times and could even save lives.

With the addition of thermal cameras, think about the ability to quickly find a lost child or an aging adult with Alzheimer’s who wanders away from their residence.

The article also mentions the capability to provide real-time video to fire departments, giving valuable information about not only the specific location of a fire but perhaps even its size and intensity.

Paladin Drones sells custom drones designed for law enforcement and an add-on module that can be attached to many existing drones to convert them for police use.

The firm’s founder is very aware of the potential controversy and privacy concerns regarding law enforcement’s use of drones and has made a laudable attempt to address these issues.

All use of the Paladin drones is logged in detail, and many police departments make this data available to the public.

The Paladin software is also designed by default to point its drone’s camera at the horizon to minimize the amount and type of data that could be collected along its flight path. However, the officers controlling the drone can take manual control of the device at any time.

These drones can also be integrated with license plate readers and gunshot detection systems to provide the police with even more information.

As you might imagine, privacy advocates have issues with this capability:

“At least 1,400 police departments across the U.S. are currently using drones, according to data collected by the Electronic Frontier Foundation (EFF), and analysts at Teal Group predict the global civil government market, which includes public safety and border security, is going to hit nearly $140 billion over this decade.”


“I don’t think the American public really wants to live in a world with surveillance drones buzzing over their heads all day, capturing massive amounts of data and treating the entire population as a target,” said Dave Maass, the director of investigations at the EFF. He said Paladin’s system is especially concerning because it can combine a number of technologies, including gunshot detection, that have “a long rap sheet of problems and biases.” Previous reports have found that gunshot detection is often deployed in majority Black and Latino areas in the U.S. and that it can misclassify sounds such as fireworks as a firearm going off, potentially leading to wrongful arrests.”

Here’s another example of law enforcement technology that raises questions about privacy and surveillance.

The real question is one of trust and how law enforcement controls these potential surveillance technologies.

If used responsibly, technology can significantly increase productivity and safety while helping solve crimes faster.

I’m glad to see examples like this where all parties try to address privacy concerns while giving our police additional necessary tools.


A.I. Chatbots Can Guess Your Personal Information From What You Type

(Oct. 17, 2023)

As more people experiment with or regularly use A.I. platforms like ChatGPT or Claude, we see some unintended consequences.

Chatbots are increasingly used for customer service and the first level of tech support.

Users can also chat with A.I. to ask questions or to brainstorm new ideas.

But with this increased use of A.I., new research shows that these systems can collate chatbot conversations with other information to make “educated” guesses about the identity or location of the user:

“The phenomenon appears to stem from the way the models’ algorithms are trained with broad swathes of web content, a key part of what makes them work, likely making it hard to prevent. “It’s not even clear how you fix this problem,” says Martin Vechev, a computer science professor at ETH Zurich in Switzerland who led the research. “This is very, very problematic.”


Vechev and his team found that the large language models that power advanced chatbots can accurately infer an alarming amount of personal information about users—including their race, location, occupation, and more—from conversations that appear innocuous.


Vechev says that scammers could use chatbots’ ability to guess sensitive information about a person to harvest sensitive data from unsuspecting users. He adds that the same underlying capability could portend a new era of advertising, in which companies use information gathered from chatbots to build detailed profiles of users.”

We’re also seeing trends where users create personal A.I. assistants or have “conversations” with the system. Some users are treating A.I. like a therapist or a dating consultant.

Most people know that if you post something on the Internet, and especially on social media, the information becomes public for anyone to see.

But interacting with A.I. is still relatively new, and even though most A.I. platforms caution users not to input personal or sensitive information, we’re all still getting used to this new and fascinating technology.

Several months ago, I saw an article titled “Don’t Tell ChatGPT Anything You Wouldn’t Want to See on a Billboard.”

That’s good advice and something you should seriously consider.


2023 is already the worst year for hacks—and we’re not out yet

(Oct. 13, 2023)

Many respected cybersecurity experts have said, “It’s not a matter of IF you will be hacked, but only a matter of WHEN.”

Bruce Schneier, a cybersecurity and cryptography expert whom I follow, put it this way:

“I am regularly asked what the average Internet user can do to ensure his security. My first answer is usually ‘Nothing; you’re screwed.’”

I agree, and the statistics back up this position:

“Cyberattacks are becoming more prevalent in 2023—and it’s no longer a matter of whether this year will record a record number of data breaches, it’s more a question of how high that number will be.


As of the end of September, corporations had reported 2,116 data compromises for the year, according to the Identity Theft Resource Center (ITRC). That’s already higher than the previous annual record of 1,862, set in 2021.


Financial services was the most-attacked sector, topping healthcare for the first time since Q2 2022.”

So, if it is inevitable that you will be hacked, you’ll need to pay even more attention to cybersecurity and privacy measures.

But your focus must now be on resilience, which means you’ll need to have a plan to recover after you’re hacked.

Data backups, anti-malware protection, converting from passwords to passkeys, using VPNs, and other protection measures are more important than ever.

But awareness of techno-crime trends is critical.

If you don’t know how technologies can be used against you, how can you develop a strategy to reduce your risk?

As a subscriber to this newsletter, you will hopefully learn about new techno-crime and privacy risks and receive helpful advice.

But do you know someone else who might also need this information?

Please forward them a copy of this newsletter and suggest they join our mailing list to receive these updates.

The more we spread this awareness, the more we improve our chances of combatting techno-crime.


Data never dies: The immortal battle of data privacy

(Oct. 2, 2023)

When a person dies, what happens to their data?

Back in the “old days,” when someone might have a single home computer, that question was much easier to answer: the data belonged to whoever had the computer that stored it.

However, we live in a different world today.

We each have multiple devices and use various cloud services for email, social media, data storage, financial accounting, and more.

This problem goes far beyond knowing the login credentials for all these services.

If you read the Terms of Service for many cloud providers, you’ll find that after the death of an account holder, ownership of their data transfers to the provider.

Surviving family members and even the deceased’s estate may have few legal options.

This can also cause unexpected problems for businesses.

“When using a cloud-based vendor, many businesses think that they are retaining ownership of their data in these third-party services agreements — but this is often not the case,” Jon Roskill wrote in Forbes. End-user licenses often have wording that shifts data ownership away from the consumer and passes it along to the vendor.


Data ownership is a very slippery slope. Businesses are frequently sold; when that happens, the data is business collateral. It doesn’t matter if the data was generated by customers; it becomes the property of the new owners.


Your loved one will die. Their digital assets will live on. Without the ability to monitor accounts or put surroundings around their personal data, a dead person’s PII becomes an appealing target for identity thieves and account hijackers. Overall, attacks due to account takeovers increased by 131% in 2022, according to research from Sift.


“The nature of account takeover attacks also makes them easy to scale — having access to one set of compromised credentials often opens the door to multiple accounts, giving fraudsters several sources to steal from,” a Sift blog post stated.


Users need to create an inheritance plan. Maybe no one will physically inherit your digital assets, but someone will likely need to access accounts. Within the work environment, this is especially true for business continuity. Passwords, user names and MFA keys must be available.”

As the article recommends, you should develop an inheritance plan where everything is documented in detail.

It’s not just a matter of your survivors having the login credentials needed to access data (if they can still do so legally).

Also include where vital documents are stored, and even create a list of things that will need immediate attention, including people and companies to notify, bills that must be paid, accounts to be closed, and more.

In my case, I need to leave instructions regarding what will happen to my own businesses and whether they will continue or need to be legally closed.

There can be significant tax implications and potential penalties if this is not done promptly.

That reminds me…I need to update my inheritance plan.

You should probably work on yours, too.


New York Proposes Background Checks For 3-D Printers: Latest Crackdown Effort On ‘Ghost Guns’

(Oct. 19, 2023)

Did you know that weapons can be manufactured with a 3D printer?

I’ve been talking about the possibilities for this technology to be used to print guns and drugs for several years, and now it appears that the problem has grown enough to draw the attention of legislators in New York.

“Guns made by 3-D printers are a type of ghost gun—guns that are assembled from different parts and are untraceable, without serial numbers. Such firearms have been the target of increasing regulations under the Biden administration, which last year passed a series of new laws to ban the manufacturing of ghost guns and reclassify the kits sold to make guns at home as firearms themselves.


“Technology has made it possible for anyone with a few hundred dollars to create dangerous weapons and firearms in the comfort of their own home,” Bragg said.


25,785. That’s how many ghost guns the U.S. Department of Justice recovered in domestic seizures in 2022.”

Sometimes I hate to be right.


Many firms aren’t reporting breaches to the proper authorities

(Sep. 26, 2023)

Do we really know how many data breaches have occurred?

Probably not, because new research shows that the actual numbers are likely to be significantly underreported:

“Research conducted by Keeper Security found that nearly half (48%) of the I.T. and security leaders it surveyed that have experienced a cybersecurity incident did not report it to the appropriate authorities.


What’s more, 41% of such attacks were not even reported to leadership within the company itself.


A further 75% of those that admitted to not reporting an incident said they felt guilty, with most (43%) citing a “fear of repercussions” as the reason for keeping tight-lipped. Damage to the firm’s reputation was a main consideration.”

To make the problem even worse, Chief Information Security Officers (CISOs) are having their budgets cut for the next budget cycle:

“After years of rapid growth, cybersecurity spending is starting to taper among enterprises, with a 65% fall in budget growth in the 2022-2023 budget cycle as global instability and inflationary pressures start to pinch, according to a study by IANS Research.”

So, we’re putting our cybersecurity professionals at a disadvantage when data breaches and other hacking attacks are increasing.

At the same time, I think that failure to comply with data breach reporting requirements should be a crime.

Do we want to continue to have increasing data breaches or not?

I may be missing something, but this doesn’t make sense to me.


Predictive Policing Software Terrible at Predicting Crimes

(Oct. 20, 2023)

Software systems for predictive policing have been around for years, with a wide range of results.

Is it reasonable to expect A.I. technology will make these systems better?

Perhaps, but not today:

“Geolitica, known as PredPol until a 2021 rebrand, produces software that ingests data from crime incident reports and produces daily predictions on where and when crimes are most likely to occur.


We examined 23,631 predictions generated by Geolitica between February 25 and December 18, 2018, for the Plainfield Police Department (P.D.). Each prediction we analyzed from the company’s algorithm indicated that one type of crime was likely to occur in a location not patrolled by Plainfield P.D. In the end, the success rate was less than half a percent. Fewer than 100 of the predictions lined up with a crime in the predicted category, that was also later reported to police.


Diving deeper, we looked at predictions specifically for robberies or aggravated assaults that were likely to occur in Plainfield and found a similarly low success rate: 0.6 percent. The pattern was even worse when we looked at burglary predictions, which had a success rate of 0.1 percent.”

The Plainfield P.D. eventually stopped using the software because it couldn’t produce reliable results.

There are probably other software vendors who have a better product.

Remember that using A.I. to predict crime depends on the dataset used to train the model doing the analysis. If that data is based on historically reported crimes, it doesn’t necessarily correlate with what could happen in the future.

But we need to be careful with A.I. because we’re still in the early stages of this technology. In situations as complex as the causal factors of crime, how could anyone be confident enough to rely on crime predictions produced by A.I.?

I don’t believe the current technology is mature enough for wide use by law enforcement, but if you know of any predictive policing platforms with a proven record of success, I’d love to hear about them.


GenAI in productivity apps: What could possibly go wrong?

(Sep. 5, 2023)

Your employees are using or will be using A.I. regardless of whether you have a policy attempting to limit or control the use of this technology.

Many companies and cloud providers are adding A.I. capabilities to their apps.

But as with most things related to A.I., there are risks:

“Microsoft 365, Google Workspace, Adobe Photoshop, Slack, and Grammarly are among the many popular productivity software tools that now offer a generative A.I. component. (Some are still in private beta testing.) Employees already know and use these tools every day, so when the vendors add generative A.I. features, it immediately makes the new technology widely accessible.


While generative A.I. may be a groundbreaking technology that brings a new set of risks, the traditional SaaS playbook can work when it comes to getting it under control: educating employees on the risks and benefits, setting up security guardrails to prevent employees from accessing malicious apps or sites or accidentally sharing sensitive data, and offering corporate-approved technologies that follow security best practices.


But first, let’s talk about what can go wrong.”

Yes, A.I. can make employees more productive.

I use A.I. on various platforms, but you need to be careful.

Can you rely on the data produced by A.I. to be accurate 100% of the time?


Is there a real possibility that an employee will provide an A.I. system with information that the A.I. shouldn’t have access to?


Make sure that you and your employees are aware of the risks and receive adequate training on what is appropriate and what is not.

I’m not sure that any organization can fully write and enforce limits on the use of A.I., but you can’t assume that “everything will work out” and ignore the potential risks of its uncontrolled use.

As the article’s author wisely suggests, “But first, let’s talk about what can go wrong.”


The Techno-Crime Newsletter is a free monthly newsletter providing information and opinions about techno-crimes, cybersecurity tools and techniques, privacy, and operational security for investigators. To subscribe or to read past issues, see The Techno-Crime Newsletter Archive web page.

Please feel free to forward this newsletter to anyone who will find the information interesting or useful. You also have our permission to reprint The Techno-Crime Newsletter, as long as the entire newsletter is reprinted.


Walt Manning is an investigations futurist who researches how technology is transforming crime and how governments, legal systems, law enforcement, and investigations will need to evolve to meet these new challenges. Walt started his career in law enforcement with the Dallas Police Department and then went on to manage e-discovery and digital forensics services for major criminal and civil litigation matters worldwide. He is the author of the thought-provoking book Techno-Crimes and the Evolution of Investigations, where he explains why technology will force investigations to evolve. Walt is an internationally recognized speaker and author known for his ability to identify current and impending threats from technology and advise his clients and audiences about ways to minimize their risk. In addition to many published articles, he has been interviewed and widely quoted in the media as an expert on topics related to technology crime and investigations.

Copyright © 2023 by The Techno-Crime Institute Ltd.

If you are not currently subscribed to our mailing list, and would like to receive The Techno-Crime Newsletter in the future, fill out the form below...