Robur Health
Gains in life expectancy are coming at the cost of years lived in poor health. This cost grows with the prevalence of conditions tied both to metabolism and to aging itself. These conditions pose long-term problems for our collective health, and they are not currently addressed by fit-for-purpose therapeutic discovery strategies.
The solution is a first-in-class technology platform that discovers therapeutics to reach a metabolic optimum. This unconventional technology taps into the growth of compute power and open data, and overlays network theory, machine learning, and bioengineering onto first principles of biology.
While no country has reversed rising obesity rates in this century, pharmaceuticals have proven to be a scalable mechanism for controlling other metabolic risk factors in the past. Future success will not come from old techniques, but a new spin on proven tactics can exert a stabilizing force on our health and economic security.
Increased life expectancy and reduced deaths from undernutrition, infections, and childbirth represent massive human achievements. Consequently, chronic and non-communicable diseases now account for an unignorable share of disease burden. Over 100 million Americans exhibit pre-diabetes or diabetes, and even reversal of weight gain in adolescents and young adults is improbable. Furthermore, these burdens continue to grow under the weight of an expanding and aging global population. Lifestyle modifications are only as effective as our ability to change behavior, and policy may not keep pace with our evolving health needs. As the dynamic modern environment exerts increasing pressure on our largely static genome, we should not expect these forces to lift themselves. Pharmaceuticals can alleviate these forces directly and at scale, but the approaches used to discover new drugs are not intentionally constructed to restore metabolic health, or to compensate for physical inactivity and unhealthy food. Additionally, mitigating the detrimental metabolic changes of aging falls outside the industry’s purview, and while markers of metabolic aging have recently been discovered, they are not yet actionable.
Aging populations now manage chronic conditions for unprecedented periods of time, and roughly half of individuals over 65 years of age take three or more medications. Medications are a positive force for health, but as the gatekeepers of this force, care providers are only as effective as the available therapeutic toolkit. The pharmaceutical industry forges these tools through both new drug discovery and repurposing of existing drugs, which respectively serve unmet health needs and reduce the risks of unnecessary prescriptions and poly-medication. This approach expands the therapeutic toolkit by operating broadly across the industry’s research and development pipeline, and by forging new tools for metabolic diseases and the detrimental metabolic changes of aging. This unique blend of tech and biotech can refuel the industry’s starving metabolic programs while opportunistically meeting a nascent biotech space focused on aging. Addressing the pain points of the biopharma industry in turn empowers care providers to serve the health needs of aging populations living more years in poor metabolic health.
This solution discovers therapeutic strategies to achieve metabolic objectives. The set of strategies spans genetic targets, molecules, and combinations which elicit a desired metabolic signature at genomic depth. Signatures across diverse experimental designs of drugs, diets, genetic manipulations, and other exposures generate a computational map. This map then pinpoints therapies which closely mimic desired signatures, such as those of diets or youth. This expansive computational approach demands advanced compute hardware acceleration, and a method to integrate complex experimental data from multiple sources, such as activities of genes and properties of enzymes. This innovation optimizes these integration parameters through an artificial learning system. Guided by human “in-the-loop” updates, this system iteratively refines and tightens a high-dimensional parameter space. These updates stop once the parameters maximize the classification accuracy of metabolic signatures across biological groups. A deep representation of metabolism captures more information than reductionist physiological measures, such as glucose or total lipid concentrations. This representation not only redefines physiology, but unlocks a redefinition of therapeutics. A therapeutic can now be defined not by its impact on glucose levels in a diabetic patient, but by its impact on the patient’s metabolic signature. The logical leap of this redefinition is the selection of a therapeutic which shifts the signature toward health, or away from disease. Recent computational proofs-of-concept suggest this unique approach may be useful for drug discovery.
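As a concrete illustration of the matching step, the sketch below scores a library of perturbation signatures against a desired signature by cosine similarity; the platform's actual signature representation and similarity metric are not specified here, so the function, data, and names (rank_therapies, drug_a, and so on) are hypothetical.

```python
import numpy as np

def rank_therapies(signature_library, desired_signature, names, top_k=3):
    """Rank candidate perturbations by how closely their genome-wide
    signatures mimic a desired metabolic signature (cosine similarity)."""
    # Normalize each perturbation signature (rows) and the target vector.
    lib = signature_library / np.linalg.norm(signature_library, axis=1, keepdims=True)
    target = desired_signature / np.linalg.norm(desired_signature)
    scores = lib @ target                      # cosine similarity per perturbation
    order = np.argsort(scores)[::-1]           # best-matching first
    return [(names[i], float(scores[i])) for i in order[:top_k]]

# Toy example: 4 perturbations measured across 6 genes; the desired
# signature might be derived from a dietary intervention or youthful tissue.
rng = np.random.default_rng(0)
library = rng.normal(size=(4, 6))
desired = rng.normal(size=6)
print(rank_therapies(library, desired, ["drug_a", "drug_b", "knockout_c", "combo_d"]))
```

Cosine similarity is one common choice for signature matching in connectivity-map-style analyses; a richer map would also encode direction, dose, and tissue context.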
- Reduce the incidence of NCDs from air pollution, lack of exercise, or unhealthy food
- Prototype
- New technology
- Artificial Intelligence
- Machine Learning
- Big Data
- United States
- For-profit
Drug discovery informatics companies now serve each step of the drug pipeline, from target identification to post-approval. Orthogonally, these companies innovate across a data continuum, from literature-based evidence to healthcare outcomes. The competitive landscape is densest from drug optimization through clinical development, drawing on scientific and patient data. Robur’s technology performs target and drug identification from scientific data, with additional applications for drug repurposing at clinical stages. Thus, this technology primarily operates at the earlier and riskier stages of discovery, where there is less competitive pressure. Additionally, computational chemistry underpins many competing drug identification and optimization technologies, which limits their applications to single targets and modalities. The major value proposition of this technology is the reduced time and cost of discovery, with bespoke design for metabolic therapeutics. These assets can then be advanced through capital-intensive pre-clinical and clinical trials via out-licensing. Technology-based collaborations represent an additional business model, and this dual strategy is increasingly common among computational biology and artificial intelligence startups.
- Technology
- Funding and revenue model
- Talent or board members
Drug discovery is fueled by massive and diverse experimental data across genes, proteins, and metabolites. To date, there is no consensus on the optimal parameters for integrating multi-modal biological data. Many published approaches rely on one data type, or justify parameters on biological assumptions. This innovation adopts a data-driven approach to test these assumptions and scale with growing datasets. Rather than choosing one method, a high-dimensional parameter space is iteratively explored and optimized through human "in-the-loop" learning. This approach lays the groundwork both for fully automated learning and for a centralized, data-agnostic integration mechanism. Continued development of these algorithms will be necessary as the volume and diversity of data types grow with new nucleotide sequencing technologies and systems-level profiling of proteins and metabolites.
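A minimal sketch of this human-in-the-loop refinement, assuming the integration parameters reduce to per-modality weights and that cross-validated classification accuracy across biological groups (via scikit-learn) serves as the objective; the real parameter space is higher-dimensional, and every name below is illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def integrate(omics_layers, weights):
    """Weighted concatenation of multi-modal layers (genes, proteins,
    metabolites) into one feature matrix -- a simple integration scheme."""
    return np.hstack([w * layer for w, layer in zip(weights, omics_layers)])

def refine(omics_layers, groups, lo, hi, rounds=5, candidates=20, seed=0):
    """Iteratively tighten the integration-weight space around the weights
    that best classify known biological groups (cross-validated accuracy)."""
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -np.inf
    for _ in range(rounds):
        for _ in range(candidates):
            w = rng.uniform(lo, hi)
            X = integrate(omics_layers, w)
            acc = cross_val_score(LogisticRegression(max_iter=1000), X, groups, cv=3).mean()
            if acc > best_acc:
                best_w, best_acc = w, acc
        # Shrink the search bounds around the current best weights; in
        # practice a human reviewer inspects each round before this step.
        span = (hi - lo) / 4.0
        lo = np.maximum(lo, best_w - span)
        hi = np.minimum(hi, best_w + span)
    return best_w, best_acc

# Toy data: three modalities for 60 samples split into two groups
# (e.g., lean vs. obese).
rng = np.random.default_rng(1)
layers = [rng.normal(size=(60, 10)) for _ in range(3)]
groups = rng.integers(0, 2, size=60)
w, acc = refine(layers, groups, lo=np.zeros(3), hi=np.ones(3))
print(w, acc)
```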
This innovation currently models large biological systems through probabilistic graphical modeling. While data processing is fast, inference over these graphical models poses a heavy computational burden. Fortunately, these algorithms can be parallelized, both in their graphical structure and in their underlying matrix operations, so they will benefit from distributed computing and hardware acceleration on large cloud compute systems.
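As a toy illustration of why these models parallelize, the sketch below writes a synchronous mean-field-style sweep over a pairwise model as a single matrix multiply; this is a generic example rather than the platform's actual algorithm, and the same operations accelerate directly on GPU array libraries such as CuPy or JAX.

```python
import numpy as np

def mean_field_update(adj, unary, q, coupling=0.5):
    """One synchronous mean-field update on a pairwise model: every node
    aggregates its neighbors' current marginals in a single matrix
    multiply, so the whole sweep parallelizes across nodes."""
    logits = np.log(unary) + coupling * (adj @ q)   # neighbor aggregation
    q_new = np.exp(logits - logits.max(axis=1, keepdims=True))
    return q_new / q_new.sum(axis=1, keepdims=True) # renormalize per node

# Toy graph: 5 nodes, 2 states each; the adjacency matrix encodes the
# model's structure, which is where structural parallelism comes from.
adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]], dtype=float)
unary = np.full((5, 2), 0.5)
unary[0] = [0.9, 0.1]                 # one observed node pulls the others
q = np.full((5, 2), 0.5)
for _ in range(10):                   # a fixed number of synchronous sweeps
    q = mean_field_update(adj, unary, q)
print(np.round(q, 3))
```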