INTIENT Life Science Research Platform

A UX case study for a one-stop-shop research platform designed to help pharmaceutical researchers narrow down their searches and share their data and studies with one another.

INTIENT™ (derived from “intelligent patient”) is a platform that promotes efficient collaboration between chemists, biotechnicians and biologists in drug therapy development. It also helps connect patients to clinics and research teams for more intimate data gathering, while reducing the time to market for new therapies. The project started during the Covid-19 pandemic to expedite the production of a vaccine. My role was to lead a team of global designers to deliver a unified platform look and an intuitive solution architecture for these complex products.

Project date: Aug 2020

What is INTIENT?

The project began as multiple individual products that were trying to get buy-in from leadership teams handling client work in the Life Science industry. It was inspired by the Covid-19 pandemic, and these teams wanted to contribute their unique technology expertise to drive value within the research sector. After demoing working prototypes to the board, leadership was impressed and decided to combine these successful prototypes into a single platform. It was then rebranded as a smart life science ecosystem platform that connects patients, clinics and research, expediting treatments to the masses. The project had over 50 people involved in building the platform and was broken into 3 sub-projects: INTIENT™ Research, INTIENT™ Patient and INTIENT™ Clinic. I was assigned as design lead in the INTIENT™ Research team and tasked with the design of 5 features (Data Source, Metadata Source, Data Entitlement, Target ID and Data Exploration).

The purpose of INTIENT Research

INTIENT Research helps scale the scientific method to deliver better therapeutic hypotheses that reduce attrition, and better therapeutics that deliver more value to patients. This is done by ingesting and weaving together public data into BigQuery and a Knowledge Graph so as to understand the associations between diseases, targets and compounds for drug discovery. We leverage the GCP AI Platform to deploy, train and manage machine learning models at scale.
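To give a flavour of what querying that harmonized store looks like, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset and table names are hypothetical placeholders, not the platform's actual schema.

```python
# Minimal sketch: querying harmonized public data loaded into BigQuery.
# Assumes `pip install google-cloud-bigquery` and application-default credentials.
# The project, dataset and table names below are illustrative placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-research-project")  # hypothetical project

query = """
    SELECT target_id, COUNT(DISTINCT compound_id) AS compound_count
    FROM `my-research-project.harmonized.compound_target_associations`
    GROUP BY target_id
    ORDER BY compound_count DESC
    LIMIT 10
"""

for row in client.query(query).result():
    print(row["target_id"], row["compound_count"])
```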

My Role

I joined INTIENT™ while it was still in mid-development. My understanding was that the project started out as several proof-of-value features driven mostly by individual groups of engineers. Once the project was adopted by company executives, the plan was to combine them into a single unified platform using the CORE team’s new design framework, IRIS. That was when I was pulled onboard to take on a lead role, ensuring the timely delivery of the platform reskin while drafting scalable designs for newer features. I led a global remote team of 6 designers and worked collaboratively with engineers and product managers who were experts in this field.

ROLE DESCRIPTION
  • Prioritised functional usability and user experience before high-fidelity visual design
  • Gathered product requirements and formulated test types to capture specific data
  • Solved problems through existing design patterns
  • Sketched, prototyped and, on occasion, conducted user testing before passing designs to the development team
  • Ensured adoption of and compliance with the new design framework for reskinning the platform within the delivery timeframe.
VALUE CREATED
  • Conducted a design thinking workshop with product owners and lead engineers (15+ people) to refine both brown- and green-field products.
  • Designed and conducted fidelity and user testing with internal teams to drive a user-centric product direction.
  • Governed the look and feel of all products in development to maintain unified visuals.​
  • Was nominated for an internal "Leadership DNA" award for efficient delivery on the product re-skin.
PEOPLE CONTRIBUTION
  • Led a team of 6 designers (US, Philippines, Malaysia) to construct high-fidelity reskin screens based on improved user story flows and the new design framework
  • Collaborated with product managers and developers by illustrating design ideas using storyboards, process flows and sitemaps.
  • Drafted a proposed product roadmap for the development and improvement of features

Features

The research platform has 4 different features, each serving a different purpose and targeted user. While my team mostly focused on reskinning efforts and adopting the IRIS design framework formulated by the INTIENT CORE team, I directed my efforts towards developing the user flows and architecting the site maps based on information hierarchy and long-term scalability. The sections below showcase my approach to making a complex platform intuitive to use while maintaining compliance with the IRIS design framework. Note: the Target ID feature required only reskinning and is not shown.

ADMIN APP

The Admin application enables users to manage their data and metadata in the INTIENT™ Research platform. It provides an easy-to-use interface for self-service upload of data and metadata, allowing users to integrate their onboarded data with other applications. This enables further research insights to be generated on the platform as a one-stop-shop. Additionally, users can govern their data with data entitlement policies for specific sensitive data, allowing access only to certain privileged users. Below is a diagram showing the 3 sub-features that make up the Admin app.

High level Site Map
1. WHAT IS DATA SOURCE

The Admin Data Source Web App serves as a platform for users to seamlessly onboard their external data onto the INTIENT Research Platform via a threefold process.

First, the user provides the requisite configurations for the platform to connect to their data residing either in a PostgreSQL database or a Google Cloud Storage data lake.

Next, the ingestion of the user’s data is triggered with the option to schedule automatic ingestion jobs in the future.

Finally, the App provides a snapshot of the ingested data along with recommended entities powered by a Machine Learning (ML) model, on a user-friendly interface. The user may augment the results of the ML model by manually tagging columns with an entity type at the click of a button.
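To make this threefold process concrete, here is a minimal sketch of what a data source configuration might look like. The class, field names and example values are hypothetical, used only to illustrate the connect, ingest and tag steps; they are not the platform's actual schema or API.

```python
# Hypothetical illustration of the connect -> ingest -> tag flow described above.
# The class, its fields and the example values are assumptions for illustration,
# not the INTIENT Research API or schema.
from dataclasses import dataclass, field


@dataclass
class DataSourceConfig:
    name: str
    source_type: str          # "postgresql" or "gcs"
    connection: dict          # database credentials or bucket details
    ingestion_schedule: str   # cron-style schedule for automatic ingestion jobs
    entity_tags: dict = field(default_factory=dict)  # column -> entity type


# Step 1: provide the configuration needed to connect to the external data.
config = DataSourceConfig(
    name="oncology_assay_results",
    source_type="postgresql",
    connection={"host": "db.example.org", "port": 5432, "database": "assays"},
    # Step 2: trigger ingestion, with the option of a scheduled automatic job.
    ingestion_schedule="0 2 * * *",  # nightly at 02:00
)

# Step 3: augment the ML-recommended entities by tagging columns manually.
config.entity_tags["chembl_id"] = "Compound"
config.entity_tags["uniprot_accession"] = "Target"
```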

High level USER FLOW
1a. COMPETITOR ANALYSIS AND RESEARCH

The chances of having a truly original idea are slim in this day and age; there are more than likely direct competitors out in the market. Hence, I did some research of my own to find lessons we could learn from others to enhance our product. Below is a compilation of screenshots and notes taken from similar services that offer data ingestion and source monitoring.

Screenshots taken from https://segment.com/

Screenshots taken from https://www.stitchdata.com/ and https://www.mongodb.com/

1b. SETTING GOALS FOR MVP

As this was a new feature, setting expectations for an alpha launch was very important. It helped everyone in the team align and agree on what needed to be developed on a limited budget with limited resources. A Minimum Viable Product, or MVP, is a development technique in which a new product is introduced to the market with basic features, but enough to get the attention of consumers; the final product is released only after sufficient testing and feedback from the product's initial users. As a lead, I took the business goals and broke them down into milestones and feature requirements, sorting them by priority. I also broke down design tasks to translate the user flow into the different edge-case scenarios a user might face, so as to cater for all possible screens. Once that was done, I briefed the engineers to get their take on technical feasibility before delivering high-fidelity screens for development.

MVP GOAL
Minimum Viable Product for launch

Users can ingest data from their database into the platform and assign entities to the ingested tables so as to enrich those tables with more values.

MUST HAVE FEATURE
Features that make the product
  • User must be able to create a data source
  • User must be able to manage a data source (CRUD: Create, Read, Update, Delete)
  • User must be able to set up a data connection
  • User must be able to set up an ingestion frequency (see the scheduling sketch after these lists):
    • Create a job
    • Set an ingestion schedule
  • User must be able to monitor the source:
    • Job monitoring: view the status of all runs within a job
    • Data lineage: track all data provenance for reproducibility
  • User must be able to view all data set details
  • User must be able to assign entities to a table inside a data source
SHOULD HAVE FEATURE
Features that have a high user-experience impact
  • User can add and create multiple job runs
    (Use case: As a user, I’d like to get the deltas of different data ingested every 24 hours but run a complete refresh every month)
  • User can make edits to their data connection as and when they wish
    (Use case: As a user, I’d like to update my data connection for security purposes)
  • User can make edits to their data ingestion as and when needed
    (Use case: As a user, I wish to disable my data ingestion when there isn’t any new or recent data for my source to ingest)
  • User can select their desired timezone to work with
    (Use case: As a user, I wish to set the workspace to display in my timezone for ease of use)
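For the ingestion-frequency requirement referenced above, here is a small sketch of the should-have scheduling use case: a daily delta ingestion plus a monthly full refresh. The job definitions and helper are hypothetical illustrations, not the platform's scheduler.

```python
# Hypothetical sketch of the scheduling use case above: a daily delta ingestion
# plus a monthly full refresh. Job definitions are illustrative assumptions only.
from datetime import date

jobs = [
    {"name": "daily_delta", "runs_on": lambda d: True},              # every 24 hours
    {"name": "monthly_full_refresh", "runs_on": lambda d: d.day == 1},
]


def jobs_due(d: date) -> list[str]:
    """Return the ingestion jobs that should run on a given date."""
    return [job["name"] for job in jobs if job["runs_on"](d)]


print(jobs_due(date(2020, 8, 1)))   # ['daily_delta', 'monthly_full_refresh']
print(jobs_due(date(2020, 8, 15)))  # ['daily_delta']
```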
1c. NAVIGATION AND INFORMATION HIERARCHY

Data Source already had a working prototype. However, the navigation was messy, making the feature difficult to use and to scale. As this is a technically heavy SaaS app, I collaborated closely with engineers and did competitive research on similar products to formulate an intuitive navigation system. Below are diagrams that show the initial state of the feature, labelled by release version, and my revised plan.

Initial Site Map
Revised Site Map
1d. USER FLOW

The real challenge for this feature was the time constraints and limited resources. We first needed to ensure that the product was functional before trying to make it a seamless user experience, so change had to be gradual and manageable. I broke the changes down into 2 implementation stages: an interim stage that bridges the initial product with the new design framework, and the MVP state that we wanted to achieve for public launch. In the interim stage, there is no CRUD in place other than a create flow. The edit state reuses the same screens as create, which could potentially cause confusion and high-cost errors; however, the interim flow fulfils the goal of a workable product. In the MVP state, there is a clear distinction between screen states, which provides positive affirmation of intent to the user and reduces accidental errors. These changes ease the user experience and reduce the cost of educating users.

Interim Stage User Flow

In the flow, I have highlighted the high-priority, high-user-impact areas: the create and error states, which can make or break the app. All other states are considered secondary. The green arrows indicate a “happy” path where the user achieves their desired goal, while the red arrows indicate an “unhappy” path in which the user is prevented from reaching their goal. Each screen is broken down into steps like a detailed storyboard. The orange text next to an exclamation symbol provides context notes, while the red text contains comments for the designers and developers working on the screens, as they both work in tandem.

Prior to submitting this flow as final for development reference, I had produced option screens and consulted the leads on them. You will notice blue pill labels below each screen description stating the option that was finalised and chosen for development. Most of these choices were made unilaterally, prioritising easy development under tight delivery timelines, even though they are not the best long-term solution. For the interim, users are locked into a 3-step system for creating, monitoring and editing. While this is an easy-to-follow flow for the user, it complicates flexibility for changes and scaling up. The targeted user for this app is someone with technical aptitude (an engineer or data specialist) who needs a high degree of control over the type of data sources they ingest.

MVP User Flow

In this flow, I have enabled flexibility by breaking the steps up into individual set-ups. That way, users have more control over their data sources based on their needs. With clearer goals broken down into bite-sized tasks, users don’t need to jump through multiple hoops to achieve what they set out to do. This also allows easier monitoring of data without getting lost in the navigation.

One piece of feedback we received concerned the “Create Data Source” flow: reviewers wanted to see the error modal that pops up if a user selects "Back to Dashboard" before clicking "Continue". This modal should confirm that the user will lose all of their progress if they decide to go back to the dashboard. The same pattern should be illustrated in any area of this flow where selecting a button could lead to the loss of a user's progress.

This is where having a clear distinction of a CREATE, EDIT, DELETE and VIEW state is important as it addresses the confusion that users would have otherwise faced. While having all 4 states is ideal, in this case, I had to prioritise the more critical states while making sure that users are still able to achieve their goals and needs. In this scenario, unless there’s an error setting up a data source, users are less likely (out of the 4 states) to make an edit. Hence, I’ve highlighted these flows as optional for the time being. Since the app is still fairly basic, users can simply create a new data source and delete an old one. Other enhancements I’ve included are filter searches and search by keyword. I’d imagine large enterprises do deal with huge amounts of data and would like to be able to search through specific ones to manage.

1e. IMPROVEMENT BREAKDOWN

The difference between the interim-stage screens and the MVP screens lies in the improvements made to reduce high-cost errors. There are 2 types of errors that users tend to make when a design is flawed: slips and mistakes. Slips occur when a user is on autopilot and takes the wrong action in service of a reasonable goal. Mistakes occur when a user has developed an incorrect mental model of the interface and forms a goal that doesn’t suit the situation. The MVP screens address these by clearly defining screens by state and by type. They also reduce the redundant UI that was present in the interim screens.

Dashboard
Source Overview
Data Connection
Data Ingestion
Entity Assignments
1f. PROTOTYPE SCENARIOS

Below are 3 use cases demonstrating the different flows, based on possible real-life scenarios, and how they impact the user. While I formulated the scenarios, the prototype was developed by a fellow member of the design team.

Create Flow-  Happy Path

This prototype demonstrates the following:

  • Select/ set up time zone
  • Creating data source
  • Setting up ingestion + adding an additional Job
  • Job Monitoring
  • Entity assignment
  • Data Lineage
Edit Flow-  Happy Path

This prototype demonstrates the following scenario- Connection credentials have been changed on the database:

  • Update my connection
  • Updating the ingestion schedule
  • Remove a job
  • Edit a job schedule
  • Job monitoring
Edit Flow-  Unhappy Path

This prototype demonstrates the following:

  • Failed Ingestion (Re-trigger Ingestion)
  • Disable Ingestion
2. WHAT IS METADATA SOURCE

The Admin Metadata Source App enables the user to ingest metadata onto the INTIENT platform akin to the Admin Data Source App, albeit with fewer steps. The user simply provides the requisite configurations for the platform to retrieve the metadata either from a PostgreSQL source or a file stored as a Google Cloud Storage object.

The purpose of ingesting metadata is to leverage the INTIENT platform’s data cataloguing capabilities by centralising all metadata information into a single source of truth in the platform’s database.

High level USER FLOW
2a. SETTING GOALS FOR MVP

The Metadata Source feature set-up is similar to that of Data Source, with only a few minor differences. Its main use is tracking ingested data that may be crucial for reproducibility, especially in research.

MVP GOAL
Minimum Viable Product for launch

Users can ingest metadata from their database into the platform for tracking data that may be crucial for reproducibility.

MUST HAVE FEATURE
Features that make the product
  • User must be able to create a metadata source
  • User must be able to manage a metadata source (CRUD: Create, Read, Update, Delete)
  • User must be able to set up a data connection
  • User must be able to set up an ingestion frequency:
    • Create a job
    • Set an ingestion schedule
  • User must be able to monitor the source:
    • Job monitoring: view the status of all runs within a job
SHOULD HAVE FEATURE
Features that have a high user-experience impact
  • User can add and create multiple job runs
    (Use case: As a user, I’d like to get the deltas of different data that is to be ingested every 24 hours but run a complete refresh every month)
  • User can make edits to their data connection as and when they wish            
    (Use case: As a user, I’d like to update my data connection for security purposes)
  • User can make edits to their metadata ingestion as and when needed
    (Use case: As a user, I wish to disable my metadata ingestion when there isn’t any new or recent data for my source to ingest)
  • User can select their desired timezone to work with         
    (Use case: As a user, I wish to set the workspace to display in my timezone for ease of use)
2b. USER FLOW
See section 1c. for Metadata Source’s site map.

Metadata Source faced similar circumstances to Data Source. It too had to be divided into 2 development stages, interim and MVP, due to the tight delivery timeline and limited resources. Quick note: Metadata Source was built by a separate engineering team from Data Source.

Interim Stage User Flow

Creating a Metadata Source was consolidated into a single form rather than a stepped flow. The reason is that metadata doesn’t require the user to set up a data connection the way Data Source does; it was also built by a different team. To set up the connection, the user uploads a JSON file, hence the form-like set-up. This flow wasn’t perfect, nor was it consistent, but it was a quick-and-dirty flow that is still workable and manageable for the user to follow.
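For illustration, the uploaded connection file might be a small JSON document like the one sketched below, validated on upload. The keys and the helper function are hypothetical, shown only to make the upload step concrete; they are not the platform's actual schema.

```python
# Hypothetical validation of an uploaded connection JSON for a metadata source.
# The required keys and the example contents are assumptions for illustration.
import json

REQUIRED_KEYS = {"source_name", "source_type", "connection"}


def load_metadata_connection(path: str) -> dict:
    """Read the uploaded JSON file and check that the basic keys are present."""
    with open(path, "r", encoding="utf-8") as f:
        config = json.load(f)
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"Connection file is missing keys: {sorted(missing)}")
    return config


# Example file contents:
# {
#   "source_name": "assay_metadata",
#   "source_type": "gcs",
#   "connection": {"bucket": "research-metadata", "object": "catalog.json"}
# }
```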

MVP User Flow

Metadata Source isn’t too far removed from Data Source and has more similarities than differences. Since Data Source is used much more frequently than Metadata Source, the overall flow shouldn’t differ too much from what the user is already familiar with. Hence, Metadata Source is set up with flexible configuration, much like the Data Source MVP flow.

2c. IMPROVEMENT BREAKDOWN

Metadata Source and Data Source bear a lot of similarities, so it is natural for Metadata Source to adopt a similar layout and flow. Below are examples of the steps I took to further improve Metadata Source from the interim stage, allowing the feature to scale and keeping user tasks small and manageable.

Create Flow
Job Set Up
Read and Edit State
Consistent Page Flow
3. WHAT IS DATA ENTITLEMENT

Data Entitlement is a feature that lets the user hide or restrict sensitive data from specific groups of users. The restriction is enforced at the data source level with configurable policies that are applied throughout the platform. Restrictions can be configured at any level, from entire databases down to the information listed inside a table row or cell, based on filtering criteria. A common use case is when highly sensitive data needs to be hidden from external users or the public eye.

Note: this build release for Data Entitlement is currently limited.

Context Setting: What Are Policies

Policies are a governing set of rules and restrictions, housed under each respective data source, that are applied across the whole platform. Each policy defines what each group of users can and cannot see. Depending on the sensitivity of the data, restrictions can be applied to a type of database, to individual tables and columns within a database, or to the specific information listed in a cell. Users are strongly encouraged to practise good policy management habits and avoid overcrowding a policy with too many restrictions.

The current build release does not yet have the ability to completely hide sensitive data. Rather, it can only mask restricted data, telling user groups that it is for special access only. Users can create their own masking messages to suit their needs; note, however, that the masking message is applied universally across the whole platform.
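To illustrate the masking behaviour, here is a small sketch of how a policy might be applied to rows returned from a restricted source. The policy structure, role names and the universal masking message are hypothetical assumptions, not the platform's implementation.

```python
# Hypothetical sketch of applying a data entitlement policy as column masking.
# The policy fields, roles and mask message are illustrative assumptions only.
MASK_MESSAGE = "Restricted: special access only"   # applied platform-wide

policy = {
    "name": "hide_patient_identifiers",
    "restricted_columns": {"patient_name", "date_of_birth"},
    "allowed_roles": {"data_steward"},
}


def apply_policy(rows: list[dict], user_roles: set[str], policy: dict) -> list[dict]:
    """Mask restricted columns for users who lack a privileged role."""
    if user_roles & policy["allowed_roles"]:
        return rows  # privileged users see the data unmasked
    return [
        {col: (MASK_MESSAGE if col in policy["restricted_columns"] else value)
         for col, value in row.items()}
        for row in rows
    ]


rows = [{"patient_name": "Jane Doe", "date_of_birth": "1980-01-01", "assay": "IC50"}]
print(apply_policy(rows, user_roles={"external_reviewer"}, policy=policy))
```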

High level USER FLOW
3a. SETTING GOALS FOR MVP

Unlike Data Source and Metadata Source, Data Entitlement had no existing prototype, so agreeing on clear MVP goals was essential before development could begin.

MVP GOAL
Minimum Viable Product for launch

Users can impose restrictions on specific users and roles by implementing policies (a course or principle of action adopted) to govern access to specific data.

MUST HAVE FEATURE REQUIREMENT
Features that make the product
  • User must be able to create a policy
  • User must be able to manage a policy (CRUD: Create, Read, Update, Delete)
  • User must be able to assign users/user roles
  • User must be able to enable/disable access to:
    • Databases within the source
    • Tables within the source
  • User must be able to create specific restrictions within a table:
    • Restrictions on columns
    • Restrictions on rows
    • Restrictions on cells
  • User must be able to download a policy
  • User must be able to audit a policy
3b. IDEATION

As this differs greatly from the usual permission and access controls that many SaaS tools offer, it required a deeper understanding of how data circulates within large organisations without hindering collaboration. I sat down with the lead engineer and product manager to determine how the information should be laid out for a more intuitive user experience. Unfortunately, there aren’t many product examples out there to reference, so we had to think outside the box while ensuring the feature could scale. Below are a couple of initial ideas and “sketches” from when we tried to map out what made sense.

Information Hierarchy

We knew that the data source was the highest level. Each data source would have a list of policies, each holding different restrictions for different types of people and roles. We also identified that inside each data source are links to several databases. Each database has its own list of tables, and within each table we can add finer restrictions to specific columns or rows.

Rough Site Map

Aside from creating and managing policies, we also identified several minimally viable features that would appeal to users in an entry-level product:

From there, we tried to determine the basic site map and where these features would fall.

UI Functionality Map

The next part was a little challenging, as we needed to define which UI elements were needed to build the app. It can be as simple as deciding whether a page needs an edit or a delete function. It also determines how the page would likely flow: whether it only requires a pop-up modal, a tab or a new page altogether.

A diagram tree like this is meant to help the developers prepare the necessary functions while waiting for the screen designs. While the diagram shown here is neither perfect nor final, it is part of the refinement process. Beside the diagram is a rough wireframe sketch to visualise how some of this UI is to be placed.

3c. SITE MAP AND NAVIGATION

After much discussion, we finally settled on the desired flow and levels of navigation. This ensures that, as complex as this SaaS app may seem, it is actually intuitive to use. Inside the diagram are rough wireframe screens to act as a guide for the frontend engineers who were waiting on the high-fidelity screen designs. The site map also included key functionalities such as “Add Users” and “Download Policy” to prepare the engineers for what would be required in the final design screens.

3d. USER FLOW

Data Entitlement was also split into 2 development stages: MVP and future enhancements. However, unlike Data Source and Metadata Source, Data Entitlement had no prototype and was proposed by leadership and product owners as an additional feature for the alpha launch. While Data Source and Metadata Source were both going through development, there was a lot of urgency to complete the user flow and kick-start development for Data Entitlement. The user flow was formulated entirely from scratch; though not perfect, the enhancements required at a later stage are minimal, consisting mostly of cosmetic improvements to the overall experience.

MVP User Flow

In this flow, we decided to reduce the number of screens and simplify the flow by using a single “Save” button. While this may be confusing, it still meets the product goals by allowing users to save their edits. Each save overwrites the last, and all user actions are recorded in an audit trail. That way, users can monitor and keep track of all changes made inside the Data Entitlement app.

Future Enhancement User Flow

In the future enhancement flow, I addressed the confusion around the “Save” button with proper confirmation and affirmation. While there are no major changes to the flow, having these improved or desired features does ease the user experience. One example is the improvement to the selection of user roles and IDs: to enable adding users efficiently, I designed a multi-select-and-add option. This cuts down the time spent adding users one at a time, especially in large organisations where project teams can be fairly large.

3e. IMPROVEMENT BREAKDOWN

Below are examples of the improvements made to Data Entitlement beyond the MVP flow, keeping user tasks small and manageable while allowing the feature to scale.

Edit Policy
Add User/ User Roles
Unsaved Changes
DATA EXPLORATION a.k.a DATA FABRIC

The Data Exploration solution enables users to explore external data sources, such as ChEMBL, UniProt, PubChem to help with Target or Compound selection. After creating an initial set of entities, users can navigate through the Data Exploration’s Target or Compound Dashboard to find associated information, such as similar compounds or related Clinical Trial Targets, to expand their selection. Both dashboards also offer Knowledge Graph capabilities to find additional connections and relationships between different entities.

KEY FEATURE CONTEXT
Data Harmonization

Brings together multiple public data sources, such as ChEMBL, UniProt, PubChem, and ClinicalTrials.gov into one cohesive view. This allows the data from different sources to be integrated together to find different associations. Users can use the ‘Property Graph View’ to explore the aggregated data to draw valuable insights and make well-informed decisions.

Q&A Semantic Search

Provides a Q&A semantic search engine which allows users to ask research questions utilizing a BioBERT model. It searches across 16,848,839 PubMed articles from 2000 to 2019, and 349,294 ClinicalTrials.gov studies up till 19th August 2020, and returns relevant articles and studies to users for consumption.

Chemical Search

Provides a Chemical Search engine that lets users draw their desired chemical structure as a search input and search for compounds in the database. Users can choose from three different modes of search: ‘Exact Match’, ‘Similarity’ and ‘Substructure’. ‘Exact Match’ returns compounds that are completely identical to the drawn structure. ‘Similarity’ returns compounds that are similar to the compound drawn; a similarity score threshold can be used to filter the resulting compound list. Lastly, ‘Substructure’ returns compounds that contain the drawn query as a substructure.
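As a rough illustration of the three search modes, the sketch below uses the open-source RDKit toolkit; this is an assumption purely for demonstration, not necessarily the engine behind the platform, and the example molecules are arbitrary.

```python
# Illustrative sketch of 'Exact Match', 'Similarity' and 'Substructure' searches
# using RDKit. RDKit and the example molecules are assumptions for demonstration.
from rdkit import Chem
from rdkit.Chem import AllChem, DataStructs

query = Chem.MolFromSmiles("c1ccccc1O")        # phenol, as drawn by the user
candidate = Chem.MolFromSmiles("Cc1ccccc1O")   # a compound in the database

# Exact Match: compare canonical SMILES for complete equality.
exact_match = Chem.MolToSmiles(query) == Chem.MolToSmiles(candidate)

# Similarity: Tanimoto similarity on Morgan fingerprints, filtered by a threshold.
fp_query = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
fp_cand = AllChem.GetMorganFingerprintAsBitVect(candidate, 2, nBits=2048)
similarity = DataStructs.TanimotoSimilarity(fp_query, fp_cand)
passes_threshold = similarity >= 0.7            # user-configurable threshold

# Substructure: does the candidate contain the drawn query as a substructure?
substructure_match = candidate.HasSubstructMatch(query)

print(exact_match, round(similarity, 2), passes_threshold, substructure_match)
```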

HIGH LEVEL SITE MAP
UNDERSTANDING THE CONCEPT OF ENTITIES

The INTIENT Research platform is made up of multiple levels of entity types: Targets, Compounds and Assays, all of which have been successfully harmonized together. The current GA 1.1 Release is primarily driven by Targets and Compounds, with supporting information from the Assay entity type. This is made possible with the use of a Knowledge Graph, which depicts connections between the various entity types. Through a single entity, the user can explore other connected entities and their supplementary details to rapidly gather insights.

Target

A Target can be a single protein or a protein complex to which a compound or drug (or one of its subunits) binds. Target information is primarily consolidated from ChEMBL and UniProt. Additional Target details can be accessed via the ‘LinkOut Details’ tab, which crosslinks to other public databases. Bioassay details in relation to the target are also included.

Compound

Compounds are mostly small molecules and proteins, with a smaller portion consisting of antibodies, oligonucleotides, oligosaccharides, enzymes and cells. Compound information is consolidated from ChEMBL and PubChem. Bioassay details in relation to the compound are also included.

SCOPING

As this was a new product, we had to define its features and map them to the MVP goals. I started by breaking the goals down into small user stories and mapping them to the desired flow. Each user story then had a rough list of screens and functions likely to fulfil it. This list was reviewed by the product and project managers. Once approved, it was mapped to the development roadmap, where the designers immediately got to work while the engineers drew up a preliminary list of features to prepare for.

SITE MAP AND CONCEPT SCREENS

After scoping out what needed to be designed, the next step was to draft concept screens and map them to a site map. This ensured that all main features were accounted for and also helped prepare the frontend engineers for what would be needed. The challenge was not having much to reference, so we took inspiration from other data-visualisation SaaS products. The display of the knowledge graphs on the Target Dashboard was inspired by BI charts.

The other challenge was to ensure that all data charts line up evenly while allowing users to customise their dashboard with relevant data visuals. I introduced a method that allows for better responsiveness depending on the type of graph: a 1:2 block ratio. The logic is for all charts in a row to have the same height. Another rule is that smaller data visuals (e.g. donut charts) expand to occupy the full width of the display area should there be empty space next to them; these smaller charts generally take up half the width of the display area when there is another small chart right next to them. This helps mitigate the complexity of stacking graphs evenly.
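A minimal sketch of that packing rule follows, using hypothetical chart objects; it captures only the half-width versus full-width logic described above, not the actual dashboard implementation.

```python
# Hypothetical sketch of the 1:2 block layout rule: large charts take a full row,
# small charts pair up at half width, and a lone small chart expands to full width.
def layout_rows(charts: list[dict]) -> list[list[dict]]:
    """Pack charts into rows of equal height according to the 1:2 rule."""
    rows, current = [], []
    for chart in charts:
        if chart["size"] == "large":           # large charts occupy a full row
            if current:
                rows.append(current)
                current = []
            rows.append([{**chart, "width": 1.0}])
        else:                                   # small charts pair up, half width each
            current.append({**chart, "width": 0.5})
            if len(current) == 2:
                rows.append(current)
                current = []
    if current:                                 # lone small chart expands to full width
        current[0]["width"] = 1.0
        rows.append(current)
    return rows


dashboard = [{"name": "targets_by_phase", "size": "large"},
             {"name": "compound_classes", "size": "small"},
             {"name": "assay_types", "size": "small"},
             {"name": "trial_status", "size": "small"}]
for row in layout_rows(dashboard):
    print(row)
```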

Another limitation was the number of variables allowed to be displayed. Larger data charts, such as bar graphs, may take up the whole width of the display area, but they are still limited in space and may not show the full data report. In the MVP build, users can only customise these charts to display a maximum of 10 variables (subject to chart type) that they wish to monitor. Future scope would eventually allow a more in-depth and comprehensive display of variables by introducing an independent page for each graph that has more than 10 variables. While the dashboard would only display the 10 selected variables that the user primarily wants to monitor, the independent page would be relevant when the user wishes to view a full report by clicking on the graph for more information.

USER FLOW

With concept screens to help steer the product experience, it was much easier to fill in the gaps and weave detailed screens and smaller feature functions into a user flow. In this flow, I have demonstrated how collaborative research between a biologist, a biotechnician and a chemist can be achieved on the platform. I’ve also included additional sub-flows showing how users can utilise the specialised search engines to narrow down their search.

General Flow

The general flow shows how a user can kick-start a research collaboration by creating an entity set (Target or Compound) they wish to monitor on the platform. Depending on the type of set, users can customise and configure their dashboard to display relevant data and project trends and comparisons. They can also conduct complex searches for compounds, specifically via drawings and matching variables based on different types of criteria. Users can also draw up property graphs to understand the relationship between two different entity types.

Search Via Entity Details

This flow shows how users can conduct a detailed search via entity details. This type of search is associative, and the entity usually already exists within the user’s list of entities to monitor. It helps the user narrow the search results to display only information directly associated with that specific entity. This flow is for users who already know what they are looking for.

BERT Search

BERT stands for Bidirectional Encoder Representations from Transformers. It is an algorithm that understands the context of words in a query, searches through millions of articles, and returns relevant results to users. In this case, the BERT model is trained specifically on research articles from PubMed and clinical trials from ClinicalTrials.gov. Users can submit conversational queries related to drug discovery and development through a search field on the platform, and expect relevant research articles and clinical trials in return. This saves users a significant amount of time compared to conventional search methods.
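A rough sketch of the retrieval idea is shown below, assuming Hugging Face's transformers library and the publicly available dmis-lab/biobert-v1.1 checkpoint with a tiny stand-in corpus; the platform's actual pipeline and article index are not shown here.

```python
# Rough sketch of BERT-based semantic search: embed the query and the documents
# with BioBERT, then rank by cosine similarity. Assumes `pip install transformers
# torch`; the two-document corpus is a stand-in for the PubMed/ClinicalTrials index.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")


def embed(texts: list[str]) -> torch.Tensor:
    """Mean-pool the last hidden states to get one vector per text."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state            # (batch, tokens, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)       # masked mean pooling


corpus = [
    "A phase II trial of a small-molecule kinase inhibitor in lung cancer.",
    "Crystal structure of a protein target implicated in inflammation.",
]
query_vec = embed(["Which compounds have been trialled for lung cancer?"])
corpus_vecs = embed(corpus)

scores = torch.nn.functional.cosine_similarity(query_vec, corpus_vecs)
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```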

There is also a special function within the BERT search that allows users to highlight keywords recognised by the system as entities. Users can select words from titles, abstracts and paragraphs on the search results page and add them to a set.

Conclusion

This project was one of the most challenging of my career to date. It was a very technically heavy SaaS platform targeted at very specific, specialised users. It was also by far one of the biggest production teams, spanning multiple countries, that I have collaborated and coordinated with. The biggest challenge of the project was the limited resources and the very tight timeline to deliver a launch. Despite being a large production team, we had 3 different products (the Admin app, Data Fabric/Exploration and Target ID) within the platform that had to be designed and built.

Thankfully, my focus was mostly on the Admin app and Data Fabric. Target ID was an already working product that only needed to be reskinned with the new design framework before being integrated into the platform. Aside from leading the design team, I was also tasked with developing training documentation for user onboarding. While documentation is great for the initial launch, it is not the most ideal method for complex onboarding.

Towards the end of my term on the project, I proposed a training package to help ease onboarding friction for new users and demonstrate the platform’s value to potential clients and stakeholders. The training package I proposed helps define the skills and knowledge learners need to perform a job and how to apply them in a workplace context. It also educates stakeholders on how the platform can benefit their existing R&D processes by delivering time to value.

In the training package proposal, I utilised the Kirkpatrick four levels model to evaluate training success. This allows us to also test the platform’s value and engagement with our users. As the platform scales, the type of training materials should also adapt accordingly to enable seamless onboarding and adoption.