The Blog
In 1999, an innovative collaboration between ten of the world's largest pharmaceutical companies, the world's largest medical research charity, and five leading academic centres emerged in the form of The SNP Consortium (TSC). Focused on advancing the field of medicine and the development of genetics-based diagnostics and therapeutics, the TSC aimed to develop a high-density single nucleotide polymorphism (SNP) map of the human genome. A Wall Street Journal article described how the two-year, $45 million program to create a map of genetic landmarks would usher in a new era of personal medicines. The following year, with the announcement of the "working draft" sequence, the consortium collaborated with the Human Genome Project to accelerate the construction of a higher-density SNP map. In 2002, a summary from the chairman of the consortium described how the program had identified 1.7 million common SNPs, significantly outperforming its original objective of 300,000. He also observed that creating a high-quality SNP map for the public domain would facilitate novel diagnostic tests, new ways to intervene in disease processes, and the development of new medicines to personalise therapies.

In the 20 years since that milestone in modern personalised medicine, there have been several significant advances. Today, the use of genotyping and genomics has progressed many cancer treatments from blanket approaches to more patient-centred models. The ability to decode DNA and identify mutations has opened up the possibility of developing therapies that address those specific mutations. The sequencing of the human genome introduced the concept of the druggable gene and advanced the field of pharmacogenomics by enabling the exploration of the entire genome in terms of response to a medication, rather than just a few candidate loci.

Precision vs. Personalisation in Medicine

The broad consensus seems to be that these terms are interchangeable. For instance, the National Human Genome Research Institute notes that precision medicine is generally considered analogous to personalised medicine or individualised medicine. The National Cancer Institute, the American Cancer Society, and the Food and Drug Administration likewise refer to personalised medicine and personalised care. In fact, the view that the terms are interchangeable, or at least very similar, is common across a host of international institutions.

However, at least one organisation has drawn a clear distinction between, and stated a preference for, one term over the other. This comes from the European Society for Medical Oncology (ESMO), with the unambiguous statement that precision medicine is preferred to personalised medicine. According to ESMO, these concepts 'generated the greatest discussion' during the creation of their glossary, and the decision to go with precision medicine came down to three reasons:

1) The term 'personalised' could be misinterpreted to imply that treatments and preventions are being developed uniquely for each individual.
2) Personalised medicine describes all modern oncology, given that personal preference, cognitive aspects, and co-morbidities are considered alongside treatment and disease factors. In this context, personalised medicine describes the holistic approach, of which biomarker-based precision medicine is just one part.
3) Precision medicine communicates the highly accurate nature of the new technologies used in base-pair-resolution dissection of cancer genomes.

And finally, according to the National Research Council, precision medicine "does not literally mean the creation of drugs or medical devices that are unique to a patient, but rather the ability to classify individuals into subpopulations that differ in their susceptibility to a particular disease, in the biology and/or prognosis of those diseases they may develop, or in their response to a specific treatment."
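To make the NRC definition concrete: classifying individuals into treatment-relevant subpopulations is, at its simplest, a rule over measured biomarkers. The sketch below is purely illustrative; the biomarker names, thresholds, and subgroups are hypothetical and not drawn from any cited study.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Hypothetical biomarker profile used for stratification."""
    patient_id: str
    her2_positive: bool   # e.g., from IHC/FISH testing
    egfr_mutation: str    # e.g., "exon19del", "L858R", or "none"

def stratify(patient: Patient) -> str:
    """Assign a patient to a treatment-relevant subpopulation.

    Illustrates the NRC definition: no drug is invented per patient;
    patients are routed to subgroups with known differential response.
    """
    if patient.her2_positive:
        return "HER2-targeted therapy candidate"
    if patient.egfr_mutation != "none":
        return "EGFR inhibitor candidate"
    return "standard-of-care arm"

cohort = [
    Patient("P001", her2_positive=True,  egfr_mutation="none"),
    Patient("P002", her2_positive=False, egfr_mutation="exon19del"),
    Patient("P003", her2_positive=False, egfr_mutation="none"),
]
for p in cohort:
    print(p.patient_id, "->", stratify(p))
```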
Key Elements of Precision Medicine

There are several models that seek to break down the complexity of the precision medicine ecosystem into a sequence of linked components. For instance, the University of California, San Francisco (UCSF) envisions precision medicine as a fluid, circular process that informs both life sciences research and healthcare decision-making at the level of individuals or populations. This model integrates findings from basic, clinical, and population sciences research; data from digital health, omics technologies, imaging, and computational health sciences; and ethical and legal guidelines into a "Google Maps for health" knowledge network.

Source: Precision Medicine at UCSF

In the publication Precision Medicine: From Science to Value, authors Ginsburg and Phillips outline a knowledge-generating, learning health system model. In this model, information is constantly generated and looped between clinical practice and research to improve the efficiency and effectiveness of precision medicine. This enables researchers to leverage data derived from clinical care settings, while clinicians get access to a vast knowledge base curated from research laboratories. Participation in this system could be extended further to include industry, government agencies, policymakers, regulators, providers, payers, and others, creating a collaborative and productive precision medicine ecosystem.

Source: Precision Medicine: From Science to Value

The UC Davis model visualises precision medicine as the 'intersection between people, their environment, the changes in their markers of health and illness, and their social and behavioural factors over time'. This model focuses on four key components: 1) patient-related data from electronic health records; 2) scientific markers of health and illness, including genetics, genomics, metabolomics, phenomics, pharmacogenomics, etc.; 3) environmental exposure and influence on persons and populations, covering both the internal environment (e.g., microbiomes) and the external environment (e.g., socio-economics); and 4) behavioural health factors (e.g., life choices).

Source: UC Davis Health
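Viewed as a data model, the four UC Davis components suggest what a unified patient record would need to hold. The following sketch is a hypothetical illustration of that structure; the field names and example values are assumptions, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class PrecisionHealthRecord:
    """Hypothetical record unifying the four UC Davis components."""
    # 1) Patient-related data from electronic health records
    ehr: dict = field(default_factory=dict)          # e.g., diagnoses, labs, medications
    # 2) Scientific markers of health and illness
    omics: dict = field(default_factory=dict)        # e.g., genomics, metabolomics results
    # 3) Environmental exposure, internal and external
    environment: dict = field(default_factory=dict)  # e.g., microbiome, socio-economics
    # 4) Behavioural health factors
    behaviour: dict = field(default_factory=dict)    # e.g., life choices over time

record = PrecisionHealthRecord(
    ehr={"diagnoses": ["T2D"], "hba1c": 7.9},
    omics={"pharmacogenomics": {"CYP2C19": "*2/*2"}},
    environment={"socio_economics": {"urban": True}},
    behaviour={"exercise": "low"},
)
```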
Another precision medicine approach, discussed in a recent Brookings report, is presented as a simple, four-stage pipeline envisioned to help companies ethically innovate and equitably deploy precision medicine. The first stage, data acquisition and storage, deals with the aggregation of big data and with the ownership, privacy, sovereignty, storage, and movement of this data. The second stage pertains to information access and research, and the need to balance healthcare innovation with adequate oversight and protection. In the third stage, clinical trials and commercialisation, a robust framework is put in place to ensure the safety, efficacy, and durability of precision medicine treatments, as well as the commercialisation of individualised products. The final stage involves evaluating societal benefits, including investments and innovations in healthcare systems, with an aim toward equitable precision medicine, so that products and treatments reach all patients with unmet medical needs.

Integrating Precision Medicine and Healthcare Systems

The true potential of a patient-centric model such as precision medicine can only be realised when physicians are able to apply research insights to clinical decisions at the point of care. However, despite huge scientific and technological breakthroughs over the past two decades, healthcare providers face multiple challenges in integrating novel personalised medicine technologies and practices. A study of a representative sample of US-based health systems revealed that, despite widespread integration efforts, the clinical implementation of personalised medicine was measurable but incomplete system-wide. This practice gap could be attributed to any number of limitations and challenges, and addressing them will have to become a priority if the breakthroughs in precision medicine are to be translated into improved care for patients.
The Human Genome Project delivered a baseline definition of the DNA sequence of the entire human genome. Population genomics extends the scope of genomics research beyond that baseline to build a better understanding of gene variability at the level of individuals, populations, and continents.

Take India, for example, where an ambitious program called IndiGen has been rolled out to map whole-genome sequences across different populations in the country. The first phase of the program, involving extensive computational analysis of 1,029 sequenced genomes from India, identified 55,898,122 single-nucleotide variants in the Indian genome dataset, 32% of which were unique to the sequenced samples from India. These findings are expected to provide the foundations for what will become an India-centric, population-scale genomics initiative.

Population genomics opens up a range of region-specific opportunities, such as identifying genes responsible for complex diseases, predicting and mitigating disease outbreaks, focusing on country-level drug development, usage, and dosing guidelines, and formulating precision public health strategies that deliver optimal value for the population. As a result, several countries across the globe have launched their own initiatives for the large-scale comparison of DNA sequences in local populations.

The Population Genomics Rush

Image source: IQVIA

The International HapMap Project, launched in 2002 as a collaborative program of scientists from public and private organisations across six countries, is one of the earliest population-scale genomics programs. A 2020 analysis of the global genomics landscape reported close to 190 global genomic initiatives, with the U.S. and Europe accounting for an overwhelming majority of these programs. Several countries have already launched large-scale sequencing programs, such as All of Us (U.S.), Genomics England, Genome of Greece, DNA do Brasil, the Turkish Genome Project, and the Saudi Human Genome Program, to name just a few. Then there is the "1+ Million Genomes" initiative in the EU, which aims to create a cross-border network of national genome cohorts that unifies population-scale data from several national initiatives. These projects collectively target a spectrum of objectives, including analysing normal and pathological genomic variation, improving infrastructure, and enabling personalised medicine.

As a result, population genomics data is exploding. An estimated 40 million human genomes had been sequenced as of 2020, with the number of analysed genomes expected to grow to 52 million by 2025. This exponential increase in population-scale data presents significant challenges, both in crunching raw data at scale and in analysing and interpreting complex datasets.

The Analytics Challenge in Population Genomics

Genomic data volumes have been increasing exponentially over the past decade, thanks in part to the plummeting costs of next-generation sequencing (NGS) technologies. Then there is the ever-expanding scope of health-related data, such as data from electronic health records, biomonitoring devices, etc., that is becoming extremely valuable for population-scale research. However, conventional integrative analysis techniques and computational methods that worked well with traditional genomics data are ill-equipped to deal with the unique characteristics and overwhelming volumes of NGS and digital-era data.
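A finding like IndiGen's "32% unique variants" is conceptually a simple operation: identifying cohort-specific variants reduces to a set difference over variant records. The scale is what bites, since the operation must run over tens of millions of entries per cohort. Here is a minimal sketch, assuming variants are keyed by the conventional (chromosome, position, ref, alt) identity; the file names and toy setup are hypothetical.

```python
# Minimal sketch: count variants unique to a cohort via set difference.

def load_variants(vcf_path: str) -> set[tuple[str, int, str, str]]:
    """Parse a (possibly simplified) VCF into a set of variant keys."""
    variants = set()
    with open(vcf_path) as fh:
        for line in fh:
            if line.startswith("#"):          # skip header lines
                continue
            chrom, pos, _id, ref, alt = line.split("\t")[:5]
            for allele in alt.split(","):     # multi-allelic sites -> one key each
                variants.add((chrom, int(pos), ref, allele))
    return variants

cohort = load_variants("cohort.vcf")            # hypothetical cohort call set
known = load_variants("reference_panel.vcf")    # hypothetical global panel

unique = cohort - known                         # set difference: cohort-specific variants
print(f"{len(unique):,} of {len(cohort):,} variants "
      f"({100 * len(unique) / len(cohort):.0f}%) are unique to the cohort")
```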
Data exploration and analysis already lag data generation by a significant margin, and that deficit will only be exacerbated as we transition from NGS to third-generation sequencing technologies.

Image source: ScienceDirect

Over the years, several de facto standards have emerged for processing genomics big data. But in spite of the significant progress made in this context, the gap between data generation and data exploration continues to grow. Most large institutions are already heavily invested in hardware and software infrastructure and in standardised workflows for genomic data analysis; a wholesale remapping of these investments to integrate the agility, flexibility, and versatility required for big data genomics is simply impractical. Integrating a variety of datasets from multiple external sources is a hallmark of modern genomics research, yet it still represents a fundamental challenge for genomic analysis workflows. The biggest challenge, however, is the demand for extremely specialised and scarce bioinformatics talent to build bespoke analytics pipelines for each research project, which significantly restricts the pace of progress in genomics research.

For data analysis to catch up with data acquisition, researchers need access to an easy-to-use, powerful solution that spans the entire workflow, from raw data analysis to data exploration and insight.

The MindWalk "One Model" Approach

At MindWalk, we offer an end-to-end, self-service SaaS platform that unifies all components of the genomics analysis and research workflow into one intuitive, comprehensive, and powerful solution. We designed the platform to address every pain point in the genomics research value chain.

For starters, it doesn't matter whether you're a seasoned bioinformatician or a budding geneticist: our platform has a learning curve that's as easy to master as Google Search.

At MindWalk, we believe that wrangling data is a tedious chore best left to technology. To that end, we have precomputed and indexed nearly 350 million sequences from 11 public databases into one proprietary knowledge database that is continuously reviewed and updated. Ninety percent of the population data from currently ongoing programs is expected to become publicly available soon, which means it will probably be just a click away. In addition, you can add self-owned databases with one click and combine them with publicly available datasets to accelerate time-to-insight. If it's genomic data, we'll make it computable.

With the MindWalk solution, you can use sequence or text to search through volumes of sequence data and retrieve all pertinent information about alignments, similarities, and differences in sequences in a matter of seconds. No more choosing algorithms and building complex pipelines: our technology enables both experts and enthusiasts to focus entirely on their research objectives without being side-tracked by the technology.

The MindWalk platform provides you with a range of intuitive, powerful, versatile, and multidimensional tools that allow you to define the scope, focus, and pace of your research without being restricted by technological limitations. Parse, slice, dice, sort, filter, drill down, pan out, and do whatever it takes to define and pursue the research pathways that you think have the maximum potential for a breakthrough.
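Search-by-sequence over a precomputed index, as described above, is typically built on k-mer indexing: every sequence is registered under each of its length-k subwords, so a query can be matched by index lookups instead of scanning the whole database. The sketch below illustrates that general technique only; it is not MindWalk's actual implementation, and the sequences are toy data.

```python
from collections import defaultdict

K = 5  # k-mer length; real systems tune this carefully

def kmers(seq: str, k: int = K):
    """Yield every overlapping length-k subword of a sequence."""
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def build_index(db: dict[str, str]) -> dict[str, set[str]]:
    """Precompute kmer -> {sequence ids}; done once, ahead of queries."""
    index = defaultdict(set)
    for seq_id, seq in db.items():
        for km in kmers(seq):
            index[km].add(seq_id)
    return index

def search(query: str, index: dict[str, set[str]]) -> list[tuple[str, int]]:
    """Rank database sequences by how many query k-mers they share."""
    hits = defaultdict(int)
    for km in set(kmers(query)):
        for seq_id in index.get(km, ()):
            hits[seq_id] += 1
    return sorted(hits.items(), key=lambda kv: kv[1], reverse=True)

db = {  # toy stand-in for an indexed sequence database
    "seq_a": "ATGGCGTACGTTAGC",
    "seq_b": "TTGACCGTACGTAAT",
    "seq_c": "CCCCCGGGGGAAAAA",
}
index = build_index(db)
print(search("GCGTACGTT", index))  # seq_a ranks first: most shared k-mers
```

The design point this illustrates is the one the paragraph above makes: the expensive work (indexing) happens once, ahead of time, so each individual query is cheap.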
Leverage the power of the MindWalk platform's state-of-the-art AI tools to quickly and intuitively synthesise knowledge from a multitude of data sources and across structured and unstructured data types. With MindWalk Research, researchers and bioinformaticians finally have access to a user-centric, multidimensional, secure, end-to-end data-to-insight research platform that enables a personalised and productive research experience by leveraging the power of modern digital technologies in the background.

Harnessing the Potential of Population Genomics

Population genomic data will continue to grow as more and more countries, especially in the developing world, realise the positive impact large-scale sequencing can have on genomics research, personalised patient care, and precision public health. However, data science is key to realising the inherent value of genomic data at scale. Conventional approaches to genomic research and analysis are severely limited in their ability to efficiently extract value from genomics big data, and research is often hampered by the need for highly skilled human capital that is hard to come by.

With the MindWalk platform, genomics research finally has an integrated solution that incorporates all research-related workflows, unifies discrete data sources, and provides all the tools, features, and functionality required for researchers to focus on what really matters: pushing the boundaries of genomics research, personalised patient care, and precision public health.