Did you know that about 1 in 12 men and 1 in 200 women have some level of color vision deficiency?
Honestly, I don’t even like the term “deficiency.” It sounds like some people are better than others, which isn’t the point at all. The reality is, if they don’t tell you, you probably won’t even notice.
This past week, I dove deep into color blindness, and the article Designing with Color Blindness in Mind sparked a lot of discussion. It opened my eyes to something surprising: so many people around me are colorblind, and I had no idea.
A few colleagues, my dad, even new friends I just met, all shared stories once the topic came up. It’s way more common than I thought. As it turns out, roughly 1 in 12 men experience some level of color vision difference. Mind blown.
But here’s something even stranger: color blindness simulation tools always translate “normal” vision into a CVD view, never the other way around. No tool tries to convey to someone with CVD (Color Vision Deficiency) what the colors they can’t distinguish actually look like. I get that it’s technically hard, but even a simple description could help!

The MVP Idea
Lately, I’ve been playing with AI to depict images from different perspectives.
That sparked an idea: what if I could build a minimum viable product (MVP) that tackles two things?
1. A CVD simulation page:
Upload an image and display major types of color blindness views.
Highlight problematic/confusing color areas.
Suggest replacement colors.
2. A color identification page:
Upload an image, click anywhere, and show the color name for “normal vision”.
Provide a natural language description to help the color vividly appear in the user’s mind.
It felt like the perfect extension of everything I’d been learning.
Product Building Journey
If you’ve been following me, you know I love using Cursor to build things. This time, my approach was slightly different:
First, I chatted with ChatGPT about whether it was even feasible.
Then, I fed that conversation into Cursor to come up with a full-blown plan.
After that, off to building!
Technical Learnings and Fun Issues
Color Simulation
Simulating color blindness meant transforming RGB values using different matrices. Here’s a taste:
export const DirectRGBMatrices = {
  normal: [1, 0, 0, 0, 1, 0, 0, 0, 1],
  protanopia: [0.567, 0.433, 0, 0.558, 0.442, 0, 0, 0.242, 0.758],
  deuteranopia: [0.625, 0.375, 0, 0.7, 0.3, 0, 0, 0.3, 0.7],
  tritanopia: [0.95, 0.05, 0, 0, 0.433, 0.567, 0, 0.475, 0.525]
};
Basically, you:
Convert RGB to LMS (Long, Medium, Short wavelength space).
Apply a color blindness matrix.
Convert back to RGB.
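As a rough sketch of what that per-pixel transform looks like (the helper name `applySimulation` is mine, not the app's), here is one pixel being pushed through one of the "direct" RGB matrices above. Note that the direct matrices fold the RGB to LMS and back round trip into a single 3×3 matrix, which is why there is no explicit LMS step in this version:

```javascript
// Subset of the matrices from the snippet above (row-major 3x3).
const DirectRGBMatrices = {
  normal: [1, 0, 0, 0, 1, 0, 0, 0, 1],
  deuteranopia: [0.625, 0.375, 0, 0.7, 0.3, 0, 0, 0.3, 0.7],
};

// Apply a 3x3 color matrix to a single [r, g, b] pixel (0-255 channels),
// clamping the result back into valid range.
function applySimulation([r, g, b], m) {
  const clamp = (v) => Math.min(255, Math.max(0, Math.round(v)));
  return [
    clamp(m[0] * r + m[1] * g + m[2] * b),
    clamp(m[3] * r + m[4] * g + m[5] * b),
    clamp(m[6] * r + m[7] * g + m[8] * b),
  ];
}

// Pure red collapses toward a yellowish tone under deuteranopia:
// applySimulation([255, 0, 0], DirectRGBMatrices.deuteranopia) → [159, 179, 0]
```

In the real app you would run this over every pixel of an ImageData buffer (or, as mentioned below, as a WebGL shader for speed), but the math per pixel is exactly this.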
For more accurate simulation, I (no… Cursor) even implemented the Brettel-Viénot-Mollon algorithm — a classic model from 1997.
And yes, it’s all accelerated with WebGL for real-time browser simulation. Wild.
The Magic of AI
Integrating AI opened a totally new door. The AI does two things:
Generate rich color descriptions: Based on color names, RGB, and CVD type, the AI describes what the color “feels” like.
Suggest color alternatives: If two colors are too confusing for someone with color blindness, AI suggests better, more distinguishable options.
Here’s a sample of the AI prompt:
const prompt = `Analyze the color ${colorName} (RGB: ${r}, ${g}, ${b}, HEX: ${hexCode}).
... recommend alternatives for accessibility.`;
And of course, if AI isn’t available (because of API limits, etc.), I (well… Cursor, at my request) built a fallback system using traditional color theory.
Fallback example:
Enhance blue channel contrast for red-green color blindness.
Adjust brightness and hue where needed.
A good old algorithm saves the day when AI is too busy.
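To make the "enhance blue channel contrast" idea concrete, here is a minimal sketch of what such a fallback heuristic could look like. The function name and the fixed nudge amount are illustrative assumptions, not the app's actual code:

```javascript
// Hypothetical fallback: for red-green color blindness, red and green carry
// little signal, so push the blue channel away from the red/green average.
// This gives confusable color pairs a cue along the blue-yellow axis, which
// protan/deutan viewers can still perceive.
function fallbackSuggestion([r, g, b]) {
  const rgAvg = (r + g) / 2;
  // Nudge blue further in whichever direction it already leans (40 is an
  // arbitrary step size chosen for illustration).
  const boosted = b + (b >= rgAvg ? 1 : -1) * 40;
  return [r, g, Math.min(255, Math.max(0, boosted))];
}

// fallbackSuggestion([200, 100, 180]) → [200, 100, 220]  (blue pushed up)
// fallbackSuggestion([200, 100, 100]) → [200, 100, 60]   (blue pushed down)
```

A real implementation would also adjust brightness and hue as the post describes, but even this one-axis nudge shows why a deterministic rule makes a dependable fallback: it needs no network call and always returns something usable.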
The Power of Combining Approaches
Watching how traditional programming and AI work together is beautiful. Matrix transformations simulate the technical side. AI brings the human-centered design piece into it.
The app now smartly switches between AI and algorithms based on:
API availability
Authentication
Rate limits
Previous errors
It’s a realistic hybrid system: advanced where possible, reliable when necessary.
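The switching logic itself can be tiny. This is a sketch under assumed field names (the app's real state object surely differs), but it captures the decision described above:

```javascript
// Pick a strategy based on current runtime conditions. All field names here
// are assumptions for illustration, not the app's actual API.
function chooseStrategy({ apiReachable, authenticated, rateLimited, recentErrors }) {
  // Use the AI path only when every precondition holds...
  if (apiReachable && authenticated && !rateLimited && recentErrors === 0) {
    return "ai";
  }
  // ...otherwise fall back to the deterministic color-theory algorithms.
  return "algorithmic-fallback";
}
```

The nice property of centralizing this in one function is that every feature (descriptions, alternative suggestions) degrades the same way, instead of each call site inventing its own error handling.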
Final Thoughts
You might be thinking, “Wait, is this really a GenAI project?” I had that exact doubt myself.
I mean, I’m not installing any fancy AI libraries or training models from scratch.
But here’s what I realized: the amount of new ideas, new knowledge, and new understanding I gained would never have happened without AI helping me explore this space. And that’s exactly the power of Generative AI: it’s not just about writing code or models. It’s about expanding how we think and what we can build.
In this case, the code, the learning journey, and the final product all exist thanks to generative AI.
Try It Out
If you’re curious, feel free to give it a try:
👉 https://colorvisionenhancer.xyz/
Got feedback? Disagree with a feature? Have ideas for improvements? I’d love to hear your thoughts: reach out through the contact form or drop a comment below!
And huge shoutout to Cursor! It’s absolutely insane that within one week, just a few hours each night, I could build a full product from scratch to deployment.
One person. One week. One idea turned real.
Honestly, I’m still amazed.
Thanks for reading! If you enjoyed this journey, stay tuned for more crazy ideas from my GenAI 30 Challenge adventures.
References:
[¹]: Brettel, H., Viénot, F., & Mollon, J. D. (1997). Computerized simulation of color appearance for dichromats. Journal of the Optical Society of America A, 14(10), 2647–2655.
[²]: Machado, G. M., Oliveira, M. M., & Fernandes, L. A. (2009). A physiologically-based model for simulation of color vision deficiency. IEEE Transactions on Visualization and Computer Graphics, 15(6), 1291–1298.
[³]: Meyer, G. W., & Greenberg, D. P. (1988). Color-defective vision and computer graphics displays. IEEE Computer Graphics and Applications, 8(5), 28–40.