About Me

Shane Zhang - On Freedom, Knowledge, and Building AI That Serves Humanity

I build AI systems that accelerate human thinking without replacing human judgment. Not because it’s the technically optimal solution, but because I believe AI should be the car that gets you where you want to go—but the destination is always yours to choose.

This belief isn’t just a design principle. It’s the result of years spent thinking about what freedom means, what knowledge enables, and what kind of future I want to help build.

On Freedom & Knowledge

Most people talk about freedom as if it’s absolute—as if you can do anything you want at any moment. But I think about it differently.

Imagine your life as a tree. When you’re born, you’re at the root. At that moment, you have a certain amount of knowledge, and that knowledge determines how many branches—how many choices—are available to you. Each choice you make takes you down one branch, and from that new position, you have a new set of choices.

Can you reach any leaf on the tree from where you are right now? No. You’re already on a specific path. But can you gradually move toward the direction you want? Yes—because at each moment, you have the freedom to make choices based on the knowledge you have.

This is why I say freedom comes from knowledge. If you don’t know AI exists, you can’t choose to use it. If you don’t know there’s a better way, you can’t choose it. You don’t know what you don’t know.

And this matters deeply for how I build AI systems. Most AI companies today expect users to have technical knowledge—to write good prompts, to understand context windows, to know how to get good results. But that’s backwards. The value of engineering is to bridge the gap between what technology can do and what non-technical people need. That’s where real freedom lies: giving people choices they didn’t know they had.

On AI & Humanity

Here’s what keeps me up at night: AI doesn’t have emotions. It doesn’t have a sense of living, of thriving, of becoming a better version of itself. But humans do.

And I don’t want to build—or live in—a world where corporations use AI to take over people’s lives. I want to build AI that helps people better their own lives.

That’s why I think of AI as a car. Before cars, you could walk, and you could only get so far. With a car, you can go anywhere you want. But here’s the critical part: where you go, and which path you take to get there—that’s your choice. Not the car’s. Not mine as the engineer. Yours.

At Empath Legal, this principle shapes everything we build. We work with lawyers on high-stakes decisions—jury selection, case strategy. The AI accelerates their analysis, surfaces patterns they might miss, handles the tedious labor. But it never makes the decision for them. It never nudges their opinion. It never misleads.

Because the moment AI starts making choices for humans, we lose something essential. We lose agency. We lose the struggle and growth that come from making our own decisions. And we lose the sense of meaning that comes from creating something with our own hands, our own minds.

The Journey Here

I didn’t always know I wanted to build AI. I started with robotics—not really by choice. Where I was born, you don’t always get to choose what you study. But I knew I wanted to understand human cognition.

When I transferred to the US and chose computer science at Rutgers, something clicked. I took classes in cognitive science, which connected what I knew about computers with how humans think. I studied sociology to understand human behavior. Philosophy classes—especially Eastern philosophy like Zhuangzi—gave me a framework for thinking about freedom, nature, and how individuals fit into the world.

One class that impacted me surprisingly deeply was on Tolstoy's War and Peace. I came from an Eastern background where emotions are meant to be buried, kept internal. But Tolstoy's emotional honesty, his willingness to be vulnerable and complex—it opened something in me. It made me realize that being fragile, being emotional, being fully human is not a weakness.

And then there was Steve Jobs. I read his biography in high school, and something about his approach resonated: challenge the norms, be yourself regardless of what the world thinks, build something amazing based on your vision even when everyone says it’s bad. That stubbornness, that refusal to compromise on what matters—that’s what I want to embody.

I worked at several companies—Fiskkit, Robert Wood Johnson, Citigroup. Each taught me something valuable. But at Citi, I realized something important: in large companies, you're a cog in a machine. You don't need to work hard to stay there, and they don't need to keep you just because you contribute. You learn only so much because you never see the big picture.

And I realized: empires aren’t built by superhumans. They’re built by normal people doing normal jobs. So why not me? Why not build something based on my vision, my values?

Why I Build

There’s a joy in writing elegant code, in seeing CI/CD pipelines pass, in solving thorny technical problems. Engineering is an art form—there are infinite ways to implement any system, and how you do it reflects who you are.

But that’s not why I build.

I build because of the moment when a lawyer sees what we’ve created and says, “This is exactly what we wanted.” When they tell us they used to spend hours on tedious analysis, and now they can focus on the creative, strategic parts of their work. When they realize they’ve been spending significant money on mediocre reports, while our AI produces something genuinely useful.

Those moments—seeing someone’s life get better because of something I built with my own hands—that’s what makes it worth it. That’s what human life is about: making others’ lives better, reducing the burden of labor so they can focus on creativity, on what makes them distinctly human.

Working with Grant, my co-founder, has been effortless in the best way. He’s deeply technical in his legal thinking, and I trust him completely on user experience. I yield to him on what users need, and he trusts me on the technical side. The result is we build products that actually help people instead of confusing them.

We’re not trying to impress anyone. We’re not trying to build a unicorn. We’re trying to build something meaningful—something that reflects our belief that AI should accelerate human capability, not replace human agency.

What Matters

My parents aren’t technical. They treat AI like an all-knowing oracle—you ask a question, and it gives you an accurate answer. That’s how most non-technical people see AI, and it’s a problem.

Because AI isn’t an oracle. It’s a tool. And like any tool, its value depends on how well it’s designed to fit the people who use it.

That’s what I care about: bridging the gap. Making powerful technology accessible to people who don’t have technical expertise. Ensuring that AI serves human needs rather than demanding that humans adapt to AI’s limitations.

I used to work on projects where we measured success by metrics: API calls handled, percentage improvements, downtime statistics. Those numbers were meant to impress corporate stakeholders.

But I don’t care about impressing corporate stakeholders anymore. I care about impressing the people who actually understand what’s hard, what’s real, what matters. I care about building systems that work reliably, that respect user agency, that solve genuine problems.

I dream of witnessing the first AGI. But in the meantime, what makes the day-to-day meaningful is this: every line of code I write, every system I architect, every decision I make about how AI should behave—it’s all in service of a vision where AI and humanity collaborate rather than clash.

Where freedom comes from knowledge.

Where the car takes you anywhere you want to go.

Where you always choose the destination.

Last updated: 2025-01-18

© 2022 - 2025 Shane Zhang

All Rights Reserved