
Courts across the country are encountering the earliest wave of AI-generated evidence, and the recent incident in Mendones v. Cushman & Wakefield, Inc. shows how quickly generative tools can challenge long-standing evidentiary assumptions. In that case, California Superior Court Judge Victoria Kolakowski identified a plaintiff-submitted witness video as an AI deepfake after noticing unnatural facial movements, repeated expressions, and metadata inconsistent with the claimed device of origin. Judges interviewed by NBC News reported growing concern that deepfake videos, audio, and documents can erode the reliability of evidence and place new strain on judicial fact-finding. This trend may soon require courts, lawyers, and litigants to conduct their authentication analyses with greater scrutiny.
Judges quoted in the NBC News investigation warned that deepfakes can influence protective-order decisions, property-record authenticity, and basic documentary trustworthiness, which may cause traditional evidentiary presumptions to shift toward heightened verification. State and federal judges are already considering new approaches. Louisiana’s Act 250 now requires attorneys to exercise “reasonable diligence” to determine whether evidence submitted by clients originated from generative AI, and the National Center for State Courts and Thomson Reuters Institute published guidance that encourages judges to question provenance, access, and corroboration when AI involvement is suspected. Although the U.S. Judicial Conference declined to move forward with deepfake-specific evidence amendments during its May session, committee notes indicated that members kept a deepfake rule “in the bullpen” for possible future consideration.
The judiciary’s recent encounters with deepfake evidence, including the Mendones case, may indicate that courts are entering a new period of “heightened scrutiny” in which longstanding assumptions about reliability, provenance, and authenticity will face new strain. Judges, attorneys, and technologists are already developing preliminary safeguards, and the rapid evolution of generative AI suggests that evidentiary practice may soon tip toward more rigorous verification processes.
DISCLAIMER
This publication may constitute attorney advertising under the laws and rules of professional conduct of one or more states. The information provided in this publication is for general informational purposes only and does not constitute legal advice. The contents are not intended to be a substitute for professional legal advice, consultation, or representation. No attorney-client relationship is formed by reading or relying on this publication. Prior results do not guarantee a similar outcome. Readers should consult a qualified attorney for advice regarding their individual circumstances or any specific legal questions they may have.
If you have questions about this publication, please contact Adam Friedman, Ralph Vartolo, or Michael DeRosa,
Friedman Vartolo LLP, 1325 Franklin Avenue, Suite 160, Garden City, NY 11530, Phone: (212) 471-5100 | Fax: (212) 471-5150.




