Vinatha Babyprakash
Product designer with over 14 years of experience shaping enterprise SaaS and human-AI experiences
Selected Work
Here is a sample of user-centric design projects I’ve worked on.
CASE STUDY
AI-Assisted Document Annotation & Case Management
2023–2026
FinTrU’s KYC annotation platform supports high-volume document processing for regulatory compliance. The platform evolved from fully manual annotation workflows to AI-assisted, human-in-the-loop processing where machine learning models now perform classification and much of the initial extraction, with analysts validating and correcting outputs.
Role Context at FinTrU
I was hired by FinTrU to lead the design function, managing a team of three to four product designers and two UX researchers. My remit was to define design strategy, set quality standards, and guide the team in delivering software solutions to improve the operational efficiency of FinTrU’s KYC services for major banking clients including Morgan Stanley, RBC, and Santander.
This was a 0–1 product initiative focused on transforming highly manual compliance operations into scalable software-supported workflows. We began by deeply studying how service teams processed KYC documentation manually, conducting detailed workflow analysis and mapping complex end-to-end journey maps to uncover inefficiencies and decision points.
Working closely with product, engineering, and data science teams, we translated these insights into software-driven workflows that incrementally enhanced operations and later incorporated AI models to assist with document classification, data extraction, and validation. Through an iterative, research-led approach, we designed a platform that improved productivity while preserving the accuracy, transparency, and regulatory control required in highly regulated banking environments.
Understanding the problem space
FinTrU’s KYC operations relied heavily on manual document processing and were increasingly difficult to scale.
Due to the nature of service-led, human-only workflows, document classification, data extraction, and evidence validation required significant manual effort across high-volume cases. From an operational perspective, this resulted in slower turnaround times, analyst fatigue, and limited efficiency gains as client volumes increased.
I ran discovery sessions and workflow workshops with service teams, compliance stakeholders, and product partners to identify operational bottlenecks and decision points.
Using these insights, I led a team of designers and researchers to map complex end-to-end journeys and define software-led workflows that could be taken forward into prototyping and iterative validation, forming the foundation for AI-assisted solutions.


Research & Discovery
Qualitative Analysis
Contextual interviews with KYC analysts, reviewers, supervisors
Workflow shadowing during live KYC case handling
Tool walkthroughs of existing spreadsheets, inboxes, and annotation tools
Quantitative & Artefact Analysis
Time-and-motion studies of manual annotation workflows
Audit log reviews to understand compliance requirements
Case lifecycle analysis (delays, rework loops, handoffs)
Validation
Usability testing with interactive prototypes
Pilot feedback from production-like environments
Key Behavioural Insights
Analysts
Observations
Edge Cases
Insights
Reviewers
Observations
Edge Cases
Insights
Supervisors
Observations
Edge Cases
Insights
Before-After Workflow
Before:
Before AI-assisted workflows, analysts manually classified documents, extracted data, selected evidence, and updated case statuses across disconnected tools. Every step required full manual effort, resulting in high cognitive load, frequent context switching, and limited scalability as volumes increased.
After:
With AI-assisted, human-in-the-loop workflows, document classification and evidence suggestions are prefilled by AI, allowing analysts and reviewers to focus on validation and exceptions. Integrated workflows, role-specific views, and traceable review states reduced manual effort while maintaining compliance, transparency, and operational control.
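The traceable review states mentioned above can be sketched as a small state machine that only permits legal transitions, so an incorrect case-status change is rejected rather than recorded. The state names here are illustrative assumptions, not the platform's actual statuses:

```python
# Hypothetical review-state machine (state names are illustrative).
# Only transitions listed in ALLOWED are legal; anything else raises.
ALLOWED = {
    "ai_prefilled": {"in_validation"},          # AI output awaits a human
    "in_validation": {"approved", "rejected"},  # analyst decides
    "rejected": {"in_validation"},              # rework loops back
    "approved": set(),                          # terminal state
}

def transition(current: str, target: str) -> str:
    """Return the new state, or raise on an illegal transition."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target

state = transition("ai_prefilled", "in_validation")
state = transition(state, "approved")
print(state)  # approved
```

Encoding the transitions explicitly is one way to keep every state change auditable, which matters when regulators review case histories.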

Platform Outcomes
Efficiency & Scale
Case throughput per analyst
Steady increase
Reduced manual annotation effort
50%+
Outreach completion time
35% faster
Accuracy & Compliance
Rework loops
Steady decrease
Reviewer rejection rates
Steady decrease
Adoption
Onboard new clients directly without customisation
80%+
Adoption of AI-assisted workflows across teams
100%
UX Success Metrics
Efficiency
Time-on-task for document validation
Time to identify missing information
Number of clicks / context switches per case
Error Reduction
Annotation correction frequency
Missed evidence rates
Incorrect case status transitions
Cognitive Load
Task completion without external tools
Analyst-reported fatigue during long sessions
User Feedback
Analysts
“Validation feels more like supervision than manual work”
Analysts
“I don’t want to go back to the earlier ways of working with the tool.”
Analysts
“I feel more confident handling complex cases”
Operations & Supervisors
“I have increased visibility into case progress and bottlenecks”
Operations & Supervisors
“Workload distribution has become easier”
Compliance & Risk
“A noticeable change and stronger alignment with regulatory expectations”
Interaction Design Portfolios

Annotation Tool
Traditional annotation was a multi-step regulatory process, and annotators constantly switched contexts between tools. We needed to give them a single platform where they could view, annotate, and submit documents. At the same time, the data science team urgently needed real KYC data to train models. So we combined the two.

Case Management
Case information was multilayered and complicated, yet case workers needed to refer to several pieces of information about a case simultaneously to make an informed decision. The existing UI used tabs, which was time-consuming when users had to cross-reference information spread across them. So we explored a workstation layout where the different pieces of information were readily available to the case analyst.

AI Model Settings
Regulatory audit requirements demanded that AI be introduced only strategically, and human review of final decisions was mandatory. Also, accuracy thresholds were not uniform across document types. So we explored options for users to set accuracy thresholds that determined the level of AI involvement in the annotation process.
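The per-document-type threshold behaviour described above can be sketched as a simple routing rule. The document types and threshold values below are illustrative assumptions, not the production configuration:

```python
# Hypothetical per-document-type confidence thresholds (illustrative values).
THRESHOLDS = {
    "passport": 0.95,       # identity documents demand higher confidence
    "utility_bill": 0.85,
    "bank_statement": 0.90,
}
DEFAULT_THRESHOLD = 0.99    # unknown types effectively always go to a human

def route_annotation(doc_type: str, model_confidence: float) -> str:
    """Return 'ai_prefill' when the model may pre-populate the annotation,
    otherwise 'manual_review'. A human still validates every final decision."""
    threshold = THRESHOLDS.get(doc_type, DEFAULT_THRESHOLD)
    return "ai_prefill" if model_confidence >= threshold else "manual_review"

print(route_annotation("passport", 0.97))      # ai_prefill
print(route_annotation("utility_bill", 0.80))  # manual_review
```

Letting users tune the thresholds themselves is what keeps AI involvement proportionate to the risk profile of each document type.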

Entity Summariser
Often, multiple documents are annotated against a single entity, and the information collected for a KYC requirement across those documents can conflict. In practice, human annotators follow certain rules to resolve these conflicts, so we decided to build a rule-based engine that resolves data conflicts automatically.
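One way such a rule engine might work is to rank conflicting values by source trustworthiness, with recency as a tiebreaker. The source-priority ordering below is an illustrative assumption, not FinTrU's actual resolution policy:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExtractedValue:
    value: str
    source_doc: str      # e.g. "passport", "utility_bill" (illustrative types)
    extracted_on: date

# Hypothetical source-priority rule: identity documents outrank others.
SOURCE_PRIORITY = {"passport": 0, "bank_statement": 1, "utility_bill": 2}

def resolve_conflict(candidates: list[ExtractedValue]) -> ExtractedValue:
    """Pick one value for a KYC field: prefer the most trusted source,
    then the most recently extracted value as a tiebreaker."""
    return min(
        candidates,
        key=lambda c: (
            SOURCE_PRIORITY.get(c.source_doc, 99),  # unknown sources last
            -c.extracted_on.toordinal(),            # newer wins within a tier
        ),
    )

values = [
    ExtractedValue("12 High St", "utility_bill", date(2024, 3, 1)),
    ExtractedValue("12 High Street", "passport", date(2023, 6, 15)),
]
print(resolve_conflict(values).value)  # 12 High Street
```

Making the rules explicit and ordered also gives compliance reviewers a single place to audit how each conflict was resolved.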
Design System & Scale Considerations
As the platform grew, consistency became a feature. I led the creation of shared components and interaction patterns—especially for AI validation and compliance workflows—which reduced rework, improved accessibility, and allowed teams to ship faster without fragmenting the experience.
My Role
Accessibility & Inclusive Design


Collaboration & Influence
As the platform lead, my role went beyond design execution. I partnered deeply with PM, engineering, data science, and compliance to make system-level decisions—especially around AI responsibility, compliance checkpoints, and scalability. A big part of my impact was aligning multiple teams around shared workflows and principles.
Product Management
Engineering & Architecture
Data Science
Compliance & Risk
Delivery
Trade-offs Negotiated with Stakeholders
Aligning Teams
Reflection & Learnings
This project reinforced that successful AI in regulated environments is about trust, accountability, and scalability. The biggest lesson was designing systems that grow with both AI maturity and regulatory confidence.
What Worked & What I Learned
What I’d Do Differently