Biotech
Freenome is looking for engineers to help us develop software to combat cancer and other age-related diseases. You will work as part of an interdisciplinary team of engineers and scientists building our internal machine learning platform. As an early team member, you’ll take the lead on major projects and collaborate actively with our world-class team of engineers, scientists, designers and product managers. You’ll design and build the systems used to power our Discovery Platform, the heart of Freenome’s experimental analyses. Since we’re a small team, you’ll also have an opportunity to determine the course of a broad range of projects and help shape the direction of the Engineering team at Freenome.
Freenome’s software systems provide the “nervous system” for the company by tracking sample analysis from start to finish, empowering and assisting lab technicians and scientists, and automating our growing collection of cancer-fighting robots. This nervous system is built using modern software development technologies and methodologies.
Work closely with machine learning, bioinformatics, and product management teams to understand needs and then architect, roadmap, and lead development of the next phase of Freenome’s discovery software platform
Develop a deep understanding of the role of the discovery platform in Freenome’s product development process and partnerships, and guide its purposeful evolution in support of these efforts
Own group charter and build a focused, collaborative engineering team
Develop and deploy reliable, maintainable, scalable, and fault-tolerant services
Guide and champion engineering hygiene and culture as a core part of the engineering backbone
Ability to understand, plan, and develop for key aspects of Freenome’s multi-analyte discovery analysis platform:
Heterogeneous data organization, accessibility, and modeling
Rapid, iterative, reproducible experimentation and analysis
Simple navigation to arbitrary states and checkpoints within the analysis tree
Clear interpretation and presentation of discovery insights in reports
5+ years of experience as part of a software development team successfully shipping a machine learning, deep learning, data science, analytical, or similar platform
Management or team lead experience
Knowledge of optimal methods for modern data storage systems, distributed systems, service architecture, and pipelining or workflow management
Track record of building distributed systems with service endpoints and distributed storage
Understanding of, and practical experience with, statistical and machine learning methods
Degree in computer science, mathematics, statistics, or a related field, or equivalent work experience
Proficiency in a general-purpose programming language: Python, Java, C, C++, etc.
Excellent written and verbal communication skills
A mindful, transparent, and humane approach to your work and your interactions with others
Deep knowledge of Python
PostgreSQL or similar relational database experience
Experience with Google Cloud Platform, or another cloud computing service
Domain-specific experience in computational biology, genomics or a related field
Experience in scientific parallel computing
Experience in high-performance computing, including SIMD or GPU performance optimization
Experience with use of automated regression testing, version control, and deployment systems
A hands-on coding question: writing some light classes, functions, etc. No algorithms or anything tricky, just solving a problem with code.
An algorithmic whiteboard question. This requires no coding or coding knowledge; a computer science background helps but is not required. The problem is drawn from the subject matter of bioinformatics analysis but does not require prior domain knowledge.
An infrastructure design whiteboard question. No coding. This is a whiteboard exercise to design the infrastructure and systems needed for a web app with many users, covering scaling, efficiency, and other considerations for a large user base.
Tools: Luigi, Airflow, Ludwig, Kubeflow, Horizon
https://code.fb.com/core-data/introducing-fblearner-flow-facebook-s-ai-backbone/
https://code.fb.com/ml-applications/horizon/
https://www.reddit.com/r/bioinformatics/comments/5bu61o/anybody_using_luigi_for_their_pipeline
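Of the workflow tools above, Luigi is the simplest to picture. Below is a minimal sketch of a two-stage Luigi pipeline; the task names, file paths, and the trivial processing steps are hypothetical, for illustration only:

```python
import luigi

class FetchReads(luigi.Task):
    """Hypothetical first stage: write raw reads for a sample to disk."""
    sample_id = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f"data/{self.sample_id}.reads.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write("ACGTACGT\n")  # stand-in for a real fetch step

class CountBases(luigi.Task):
    """Hypothetical second stage: depends on FetchReads via requires()."""
    sample_id = luigi.Parameter()

    def requires(self):
        return FetchReads(sample_id=self.sample_id)

    def output(self):
        return luigi.LocalTarget(f"data/{self.sample_id}.counts.txt")

    def run(self):
        # self.input() is the output target of the required task.
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write(str(len(fin.read().strip())) + "\n")

# Run with: luigi --module this_module CountBases --sample-id S1 --local-scheduler
```

Each task declares its dependencies via requires() and its artifact via output(), so the scheduler can skip work that is already done and re-run only what is missing.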
Machine learning is a method of statistical learning in which each instance in a dataset is described by a set of hand-engineered features or attributes. Deep learning, in contrast, is a method of statistical learning that extracts features or attributes from raw data, using neural networks with many hidden layers, large datasets, and powerful computational resources. The terms are sometimes used interchangeably, but the key difference is that deep learning algorithms construct representations of the data automatically, whereas in classical machine learning the data representation is hard-coded as a set of features, often requiring further steps such as feature selection and extraction (e.g. PCA).
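To make the distinction concrete, here is a small scikit-learn sketch (the dataset and model choices are illustrative, not a recommendation): the classical pipeline hard-codes a representation with PCA before fitting a linear model, while the neural network learns its own representation from the raw inputs.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Classical ML: a hand-chosen representation (PCA features) + a simple model.
classical = make_pipeline(PCA(n_components=30), LogisticRegression(max_iter=1000))
classical.fit(X_train, y_train)

# Neural approach: hidden layers learn the representation from raw pixels.
neural = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
neural.fit(X_train, y_train)

print("PCA + logistic regression:", classical.score(X_test, y_test))
print("Neural network:", neural.score(X_test, y_test))
```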
Shotgun Sequencing
Special machines, known as sequencing machines, are used to extract short random DNA fragments from the genome we wish to determine (the target genome). Current DNA sequencing technologies cannot read a whole genome in one pass; they read small pieces of between 20 and 30,000 bases, depending on the technology used. These short pieces are called reads. Special software is used to assemble these reads according to how they overlap, generating continuous strings called contigs. These contigs can be the whole target genome itself, or parts of the genome.
The process of aligning and merging fragments of a longer DNA sequence in order to reconstruct the original sequence is known as sequence assembly.
To obtain the whole genome sequence, we may need to generate more and more random reads until the contigs cover the entire target genome.
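A toy greedy assembler makes the reads-to-contigs step concrete: repeatedly merge the pair of fragments with the largest suffix/prefix overlap until no sufficient overlap remains. Real assemblers are far more sophisticated; this sketch only illustrates the idea.

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that matches a prefix of b."""
    for length in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:length]):
            return length
    return 0

def assemble(reads: list[str], min_len: int = 3) -> list[str]:
    """Greedily merge the best-overlapping pair of fragments until no
    pair overlaps; whatever strings remain are the contigs."""
    contigs = list(reads)
    while True:
        best = (0, None, None)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    o = overlap(a, b, min_len)
                    if o > best[0]:
                        best = (o, i, j)
        o, i, j = best
        if o == 0:
            return contigs
        merged = contigs[i] + contigs[j][o:]  # join, dropping the overlap
        contigs = [c for k, c in enumerate(contigs) if k not in (i, j)]
        contigs.append(merged)

print(assemble(["ACGTAC", "GTACGG", "CGGATT"]))  # -> ['ACGTACGGATT']
```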
Pipe-Filter Pattern
This pattern can be used to structure systems that produce and process a stream of data. Each processing step is enclosed within a filter component, and the data to be processed is passed through pipes. These pipes can also be used for buffering or for synchronization.
Usage
Compilers: the consecutive filters perform lexical analysis, parsing, semantic analysis, and code generation.
Workflows in bioinformatics.
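In Python, a pipe-filter chain falls out naturally from generators: each generator is a filter, and lazy iteration acts as the pipe. The filters and the input file below are illustrative.

```python
def read_lines(path):
    """Source: stream raw lines from a file."""
    with open(path) as f:
        yield from f

def parse(lines):
    """Filter 1: strip whitespace and skip comment lines."""
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#"):
            yield line

def uppercase(records):
    """Filter 2: normalize records to upper case."""
    for record in records:
        yield record.upper()

# Compose the pipeline: each stage pulls from the previous one lazily.
pipeline = uppercase(parse(read_lines("reads.txt")))
for record in pipeline:
    print(record)
```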
Blackboard Pattern
This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard pattern consists of three main components:
blackboard — a structured global memory containing objects from the solution space
knowledge source — specialized modules with their own representation
control component — selects, configures, and executes modules
All the components have access to the blackboard. Components may produce new data objects that are added to the blackboard, and they look for particular kinds of data on the blackboard, which they may find by pattern matching against the existing knowledge sources.
Usage
Speech recognition
Vehicle identification and tracking
Protein structure identification
Sonar signal interpretation
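A minimal sketch of the pattern (all component names hypothetical): knowledge sources inspect the shared blackboard and contribute when they are applicable, while a simple control loop selects and executes sources until one of them posts a solution.

```python
class Blackboard:
    """Structured global memory shared by all knowledge sources."""
    def __init__(self):
        self.data = {"raw": "acgt", "normalized": None, "solution": None}

class NormalizeSource:
    """Knowledge source: applicable while the raw input is unprocessed."""
    def applicable(self, bb):
        return bb.data["normalized"] is None
    def execute(self, bb):
        bb.data["normalized"] = bb.data["raw"].upper()

class SolveSource:
    """Knowledge source: applicable once normalized data exists."""
    def applicable(self, bb):
        return bb.data["normalized"] is not None and bb.data["solution"] is None
    def execute(self, bb):
        bb.data["solution"] = f"result({bb.data['normalized']})"

def control(bb, sources):
    """Control component: select and run applicable sources until solved."""
    while bb.data["solution"] is None:
        for source in sources:
            if source.applicable(bb):
                source.execute(bb)
                break
        else:
            raise RuntimeError("no applicable knowledge source")

bb = Blackboard()
control(bb, [NormalizeSource(), SolveSource()])
print(bb.data["solution"])  # -> result(ACGT)
```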
To summarize the trade-offs: the pipe-filter pattern lends itself to concurrent processing and makes filters easy to reuse and rearrange, at the cost of data-transformation overhead between stages; the blackboard pattern is easy to extend with new knowledge sources, but its shared structure is hard to modify and its components may need synchronization.
Bioinformatics for Dummies by Cedric Notredame and Jean-Michel Claverie
Bioinformatics for Beginners: Genes, Genomes, Molecular Evolution, Databases and Analytical Tools by Supratim Choudhuri
Bioinformatics Programming in Python: A Practical Course for Beginners by Ruediger-Marcus Flaig
Bioinformatics Programming Using Python by Mitchell L. Model
Python Programming for Biology: Bioinformatics and Beyond by Tim J. Stevens and Wayne Boucher