Bioinformatics Core

Bioinformatics analysis, compute infrastructure and training.

The Bioinformatics Core supports research by providing data analysis expertise and high-performance compute infrastructure. We collaborate on research projects from inception through to publication, offering advice on experimental design, managing large amounts of data and performing computational analyses.

We have extensive expertise with high-throughput sequencing experiments, proteomics, transcriptomics and genomics datasets, and data from emerging technologies. We develop workflows, visualisations and interactive applications for processing and interrogating these datasets.


The core takes a lead role in encouraging researchers to develop their own bioinformatics skills by offering regular training courses and networking events, as well as by promoting the principles of reproducible research.

The Bioinformatics Core is managed by Shaun Webb and supported by two experienced bioinformaticians, Hywel Dunn-Davies and Daniel Robertson. 

What we provide

  • Bioinformatics compute infrastructure
    • Access to dedicated bioinformatics analysis servers

    • Data hosting and management

  • Research collaborations
    • Advice on experimental design

    • Analysis of high-throughput experiments

      • All types of sequencing experiments (short-read, long-read, nanopore, single-cell etc.)

      • Standard analysis workflows (ChIP-seq, RNA-seq, Methyl-seq, CRAC, ATAC-seq etc.)

      • Downstream analysis (visualisations, statistics, comparative analysis etc.); a minimal sketch follows this list

      • Data management (download, storage, archiving and publication of data)    

  • Bespoke data analysis  
    • Software/workflow development

    • Data visualisations

    • Interactive web applications for exploring data    

  • Training and teaching  
    • Regular workshops and events to empower users to perform their own bioinformatics

    • Bioinformatics Live seminars to support a knowledge network amongst our users

    • Training on request for individuals or groups

    • See DRP-HCB Training Courses for details of upcoming events.
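
As an illustration of the "Downstream analysis" item above, here is a minimal sketch in Python of one such step: summarising and visualising a gene-level counts table. The file name, the genes-by-samples layout and the counts-per-million normalisation are placeholder assumptions for illustration, not a prescribed workflow.

    # A minimal sketch: quick summary and visualisation of a counts table.
    # "counts_matrix.tsv" and its genes-by-samples layout are hypothetical.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Load raw counts (rows = genes, columns = samples)
    counts = pd.read_csv("counts_matrix.tsv", sep="\t", index_col=0)

    # Simple library-size normalisation: counts per million per sample
    cpm = counts / counts.sum(axis=0) * 1e6

    # Basic per-sample summary statistics
    print(cpm.describe())

    # Compare per-sample distributions on a log scale
    np.log2(cpm + 1).boxplot(rot=90)
    plt.ylabel("log2(CPM + 1)")
    plt.tight_layout()
    plt.savefig("cpm_distributions.png")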


Resources available on the DRP-HCB bioinformatics servers

  • Large memory (1 TB) and 256 CPU cores
  • SSH login to a UNIX command-line environment
  • Graphical access via X2Go
  • Version-controlled bioinformatics software 
  • Automated pipelines with Nextflow and Snakemake (see the sketch after this list)
  • Web servers:
    • RStudio: Data analysis with R programming language
    • R Shiny: Interactive applications
    • JupyterLab: Interactive Python notebooks
    • Apache: Access and share files via the internet
    • HiGlass: Visualise Hi-C data
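
To illustrate the "Automated pipelines" item above, here is a short sketch of the underlying idea in plain Python rather than Nextflow or Snakemake syntax: each step declares its inputs and output, and is re-run only when the output is missing or out of date. The sample names, file paths and samtools command are hypothetical placeholders.

    # Plain-Python sketch of the pipeline idea used by workflow managers.
    # Paths, sample names and the samtools step below are hypothetical.
    import subprocess
    from pathlib import Path

    SAMPLES = ["sampleA", "sampleB"]

    def run_step(cmd, inputs, output):
        """Run a shell step only if its output is missing or older than its inputs."""
        out = Path(output)
        if out.exists() and all(out.stat().st_mtime >= Path(i).stat().st_mtime for i in inputs):
            print(f"skip {output} (up to date)")
            return
        out.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(cmd, shell=True, check=True)

    for sample in SAMPLES:
        bam = f"data/{sample}.bam"
        summary = f"results/{sample}.flagstat.txt"
        run_step(f"samtools flagstat {bam} > {summary}", [bam], summary)

In practice, workflow managers such as Nextflow and Snakemake handle this dependency tracking declaratively, along with logging, parallelisation across the available cores and resumption after failures.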

Resources

Bioinformatics resources, including best practices for using our servers and links to training sites, are available on our GitHub webpage.

Access to the DRP-HCB Bioinformatics Core

  • To access the Bioinformatics Core, you must first register a project with the DRP-HCB
  • Access to bioinformatics servers
    • Contact drphcb@ed.ac.uk to request an account
    • Bioinformatics server users must agree to our user policy and complete our introductory course.
  • Research collaborations