
From fragmented search results to structured evidence
Legal research is complex by nature.
But complexity alone does not justify confusion.
When I joined the project at the beginning of 2024, the product faced critical usability challenges. Users struggled to retrieve meaningful results, navigate court decisions, and understand how outputs related to their legal questions.
What followed was not a linear optimization process. It was a fundamental re-evaluation of interaction design, information architecture, and product direction.
About a year into our collaboration, the company underwent a major transformation. iur.crowd evolved into Anita — a rebranding that introduced radical simplicity as a guiding strategic principle. This shift did not initiate the product redesign, but it sharpened and reinforced the direction we had already begun to take.
This case study documents that evolution.
From repairing a broken core interaction to rethinking document navigation, navigating a brand transformation, and ultimately introducing a contextual AI layer designed to reduce cognitive load rather than amplify it.
At its core, this project was not about adding features.
It was about removing friction, restoring orientation, and designing systems users could trust.
Role
Product Designer (Freelance)
Scope
Search interaction redesign
Information architecture restructuring
Judicial decision analysis
AI chatbot integration
Brand-aligned interface redesign
Focus Areas
Improving core usability
Reducing cognitive load
Strengthening trust in AI-supported outputs
Creating clear orientation within complex legal documents
Impact
Stabilized core search flow
Reduced usability breakdowns in testing
Increased perceived trust through evidence-backed responses
Established a scalable foundation for future product iterations
Understanding the user before fixing the product
Before redefining the product experience, I needed to understand the professionals it was built for.
Through recorded legacy interviews and newly conducted qualitative user interviews, I analyzed how legal practitioners approach research in their daily work. Patterns across conversations quickly became clear. Time pressure dominates their workflow. Legal research is not an isolated task. It happens between client calls, court appointments, drafting documents, and administrative responsibilities.
From these insights, I developed a primary user persona to ground all subsequent design decisions.
Alexander represents a pragmatic and highly busy legal professional. As the owner of a law firm specializing in commercial legal protection, his days are tightly scheduled. Research must be efficient, structured, and reliable. He does not seek exploration. He seeks clarity.
User persona

Repairing the core search interaction
At the beginning of 2024, the product was in a highly critical state from a usability perspective.
Users consistently struggled to obtain any meaningful research results. Not because the underlying data was insufficient, but because the interaction model itself blocked successful usage.
The central issue was the decision to introduce a dual search bar, requiring users to enter two inputs simultaneously:
a factual description (“Sachverhalt”) and
an argument
This concept proved unintuitive, cognitively demanding, and fundamentally misaligned with how users naturally approach legal research tasks.
Users felt lost and disoriented when interacting with the existing design. The lack of structure and clear guidance led to a frustrating experience, with users wandering aimlessly across the screen without finding the information they needed. By introducing a logical information architecture and intuitive navigation, I aimed to create a design that would effortlessly guide users to their desired outcomes.

Home Screen 01.2024:
Users had to fill out two search bars - without any clue how to fill them.

Revised Home Screen 03.2024:
Users could search for court decisions based on their natural behavior.
Simplifying the search entry point
Problem
The dual search bar forced users to enter a factual description and an argument simultaneously, a pattern fundamentally misaligned with natural research behavior. Instead of supporting search, the interaction model actively blocked it.
Why it mattered
Users could not progress through even basic research tasks. The interface introduced unnecessary cognitive load and created constant confusion about what the system expected.
Evidence from testing
205 cumulative error rating points across 6 participants (indicating severe breakdowns in core flows).
Product satisfaction critically low: 2.8 / 7 on a Likert scale
Representative quotes:
“Am I supposed to search for two things at once?”
“I don’t understand what the second field expects.”
What we tried
Improving instructions, modifying placeholder texts, and adjusting the weighting between the two fields. None of these mitigations reduced confusion.
What went wrong
The concept itself was flawed. Users do not formulate arguments upfront. Requiring parallel input introduced friction and prevented meaningful search results.
Design decision
Remove the dual search bar and replace it with a single, intuitive entry point. Rebuild the information architecture and redesign the results layer to create a coherent and guided research experience.
Structuring judicial reasoning (and why it failed)
After improving the search interaction, our next step was to help users work more effectively within the court decisions themselves. At this time, the team pursued an ambitious idea: to extract the logical structure of each decision, break the judicial reasoning into discrete argument segments, and present these segments in a dedicated navigation bar.
The intention was clear: reduce the cognitive burden of reading long, unstructured documents by giving users a high-level map of the argumentation. In theory, this would allow legal professionals to jump directly to relevant passages, understand the judgment flow, and move through decisions more strategically.
In practice, however, the concept proved fragile.
German court decisions follow no unified structure. Paragraph hierarchy, argument patterns, headings, and narrative flow vary dramatically between courts, chambers, and even authors. This variability resulted in inconsistent extraction quality and unstable argument segmentation. The system produced navigation structures that looked systematic on the surface but were not reliable enough to support real legal work.
User testing made the limitations clear:
The navigation bar forced users into a predefined structure that often did not match the document’s true reasoning.
Users struggled with the added complexity of navigating a second layer on top of the primary text.
The extracted argument elements were not sufficiently accurate to be trusted as a basis for legal interpretation.
Instead of simplifying the reading experience, the approach unintentionally introduced more cognitive load and failed to provide the clarity users needed. This phase became a crucial learning moment.
It revealed that partial structure without full reliability harms more than it helps, and that users require dependable orientation, not speculative interpretation layers.
These insights directly shaped the next major shift in product direction.

Prototype 10.2024:
The information from the court decisions is clustered into four sections.

Prototype 10.2024:
A single court decision page - structured into its arguments.
Rethinking structured document navigation
Problem
We aimed to help users navigate long court decisions more effectively by extracting argument structures and presenting them in a dedicated navigation bar.
Why it mattered
Legal professionals need orientation within dense, unstructured documents. A reliable map of the judicial reasoning could dramatically reduce reading effort, provided the structure is accurate.
Evidence from testing
Users described the concept as confusing and fragile. It increased cognitive load instead of reducing it. Navigation via the extracted structure frequently misaligned with the actual document logic.
What we tried
Automatic extraction of argument steps
Hierarchical segmentation of reasoning
A sidebar mirroring the assumed logic of the decision
Highlighted argumentative building blocks inside the text
What went wrong
Inconsistent document structures
German court decisions vary dramatically; extraction was unstable.
Added navigation complexity
Users had to interpret two layers: text and extraction.
Poor argumentation quality
The segmentation was too unreliable for legal work.
Misalignment confirmed in testing
The concept slowed users down instead of supporting them.
Design decision
Stop pursuing artificial reconstruction of judicial reasoning. Shift toward more resilient support mechanisms: contextual summaries, evidence-backed insights, and guidance that does not rely on perfectly structured documents.
Brand transformation: from iur.crowd to Anita
During this transitional phase, the company underwent a fundamental transformation. As part of the startup accelerator program Silicon Allee, supported by Fraunhofer HHI, iur.crowd was repositioned and rebranded as Anita.
The rebranding was led by the Creative Director of Silicon Allee, who defined the new strategic and visual foundation of the brand. Anita stands for radical simplicity within the complex ecosystem of legal research. The goal was to move away from technical density and toward clarity, reduction, and confident guidance.
In close collaboration with the Creative Director, I redesigned the application from the ground up based on this new branding vision. This included:
• A complete visual redesign
• A new color system
• Updated typography
• Refined spacing and layout principles
• A clearer visual hierarchy
• A confident interface language
The redesign was not merely aesthetic.
It marked a strategic shift toward simplicity as a product principle.
This brand transformation created the conceptual foundation for the next product evolution. The move toward radical simplicity directly influenced how we approached the chatbot layer: not as an additional feature, but as a focused interpretive tool designed to reduce complexity rather than add to it.
Introducing the contextual chatbot layer
When the limitations of document extraction became fully visible, the team recognized that a different approach was needed: one that could provide clarity without relying on perfectly structured texts. The rebranding to Anita, with its principle of radical simplicity within the complex information network of legal research, reinforced this direction.
Our user research backed up this direction: professionals needed support in interpreting search results, understanding which decisions mattered, and identifying why they were relevant. They needed a bridge between raw legal text and actionable legal reasoning.
This opened the door for a new solution: one that connected search results, contextual understanding, and evidential quotes into a coherent narrative. The next phase of the project focused on designing and integrating this interpretive layer: a chatbot interface capable of summarizing findings, referencing the original decisions, and reducing the cognitive overhead of reading each document individually.
This set the stage for one of the most impactful additions to our legal analytics tool.

Home screen 11.2025:
The app with its clear chatbot patterns.

Search Results & Chatbot 11.2025:
The quote from the referenced court decision in its hover state.
Designing the contextual chatbot layer
Problem
Even with improved search, users struggled to understand how individual court decisions related to their query. They needed contextual summaries and evidence-based links to trust the output.
Why it mattered
Without interpretation, results remained fragmented. Users spent too much time reading entire decisions and cross-checking their relevance.
Evidence from testing
Participants described the chatbot as an “evidence layer,” significantly increasing trust. Quotes directly linked to original decisions improved perceived reliability and reduced reading effort.
What we tried
Summaries of search results
Direct references to relevant quotes
UI cues for answer completion
Iteratively tested display patterns for clarity
What went wrong
Earlier versions overwhelmed users with too much information. Later iterations improved readability, pacing, and trust-building through refined referencing and visual completion signals.
Design decision
Implement an AI-driven contextual layer that:
Summarizes search results in natural language
Surfaces evidence directly from original decisions
Clarifies how results relate to the user’s question
Reduces cognitive load by acting as a guided interpretation layer
This established a reliable bridge between raw legal text and actionable insights.
Key learnings
1. Fix the interaction before scaling the system
A flawed core interaction cannot be compensated by additional features.
The dual search bar failed not because users lacked understanding, but because it contradicted their natural workflow.
If users cannot start a task smoothly, every downstream improvement remains ineffective.
2. Unreliable structure undermines trust
Extracting judicial reasoning promised orientation, but inconsistent document structures made the system fragile.
When a feature appears systematic yet behaves unpredictably, it increases cognitive load instead of reducing it.
In high-stakes domains, perceived precision must equal actual reliability.
3. Simplicity requires strategic restraint
The rebranding to Anita sharpened a principle that had already emerged: simplicity is not visual reduction, but disciplined decision-making.
Every element must justify its existence.
Every layer must lower friction.
Clarity is a product decision, not a visual trend.
4. AI must reduce cognitive load to create value
The chatbot succeeded when it acted as an interpretive bridge.
Summaries alone were insufficient. Trust increased only when answers were directly linked to verifiable decisions.
AI created impact through contextualization and evidence.
5. Research must guide direction changes
User testing did not only validate improvements.
It invalidated entire concepts.
Abandoning structured argument extraction was not a setback, but a strategic correction.
Design maturity includes knowing when to stop.
Final quote and outlook
The introduction of the contextual AI layer marked a foundational shift.
But it is not the end state.
The long-term opportunity lies in deepening contextual guidance while maintaining strict simplicity. Future iterations will focus on improving reference precision, strengthening trust signals, and continuously aligning AI interpretation with real-world legal workflows.
The guiding principle remains unchanged:
Reduce complexity.
Increase clarity.
Earn trust through reliability.