On March 24, attackers published poisoned versions of litellm to PyPI, compromising the transitive dependency tree of CrewAI, DSPy, Browser-Use, and a dozen other AI frameworks. The payload harvested credentials, deployed Kubernetes worms, and installed persistent backdoors.
David Daniel | Mar 25, 2026
Shift-left your quantum readiness by adding cryptographic policy enforcement to your build pipeline.
David Daniel | Mar 04, 2026
Manual crypto audits miss critical dependencies. Here's how automated discovery changes the game.
David Daniel | Mar 04, 2026
Harvest Now, Decrypt Later attacks are already happening. Here's why waiting for quantum computers is the wrong strategy.
David Daniel | Mar 04, 2026
Large language models are transforming enterprise workflows, but not in the ways most people expect.
David Daniel | Mar 04, 2026
How we built an AI system to automatically detect errors in GOES and VIIRS satellite imagery for NOAA's Earth Observation Digital Twin project.
David Daniel | Mar 04, 2026
The difference between a demo and a production ML system is reliability. Here's what we've learned.
David Daniel | Mar 04, 2026
Most enterprise AI pilots never make it to production. Here's our battle-tested framework for bridging the gap.
David Daniel | Mar 04, 2026