Showing 2 open source projects for "code analysis"

  • 1
    clusterProfiler

    A universal enrichment tool for interpreting omics data

    clusterProfiler is an R/Bioconductor package that provides a unified workflow for functional enrichment analysis to interpret high-throughput omics results. It supports both over-representation analysis and gene set enrichment analysis, so you can work with either unranked gene lists or ranked statistics from differential expression pipelines (see the first R sketch after this list). The package connects to multiple knowledge bases, such as Gene Ontology, KEGG, Reactome, Disease Ontology, and MeSH, through a consistent interface, so you can query different biological lenses without rewriting code.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 2
    Reproducible-research

    A Reproducible Data Analysis Workflow with R Markdown, Git, Make, etc.

    ...The workflow ensures that 1) the reported statistical results are consistent with the actual computed results (dynamic report generation), 2) the analysis can be reproduced exactly at a later point in time, even if the computing platform or software has changed (computational reproducibility), and 3) changes made at any time, during development and after publication, are tracked, tagged, and documented, while earlier versions of both data and code remain accessible. A sketch of the report-rendering step appears after this list.
    Downloads: 0 This Week
    Last Update:
    See Project
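
To illustrate the unified interface described for clusterProfiler above, here is a minimal R sketch of its two analysis modes: over-representation analysis on an unranked gene list and gene set enrichment analysis on a ranked statistic. The gene IDs, the org.Hs.eg.db annotation package, and the cutoffs are illustrative assumptions, not taken from the project page.

```r
# Minimal sketch, assuming Entrez gene IDs and the human annotation
# package org.Hs.eg.db; the IDs and cutoffs below are illustrative only.
library(clusterProfiler)
library(org.Hs.eg.db)

# Over-representation analysis (ORA) on an unranked gene list
de_genes <- c("5290", "673", "1956")            # hypothetical DE gene IDs
ego <- enrichGO(gene          = de_genes,
                OrgDb         = org.Hs.eg.db,
                ont           = "BP",
                pAdjustMethod = "BH",
                pvalueCutoff  = 0.05)

# Gene set enrichment analysis (GSEA) on a ranked, named statistic vector
# (random values stand in for e.g. log fold changes from a DE pipeline)
ranked <- sort(setNames(rnorm(1000), sample(keys(org.Hs.eg.db), 1000)),
               decreasing = TRUE)
gsea <- gseGO(geneList = ranked, OrgDb = org.Hs.eg.db, ont = "BP")

head(as.data.frame(ego))   # enriched GO terms, if any
```

Swapping enrichGO/gseGO for the KEGG, Reactome, or disease-oriented counterparts keeps the same call pattern, which is the "consistent interface" the description refers to.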
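
As a companion to the Reproducible-research workflow description, here is a minimal R sketch of the dynamic report generation step: re-rendering the R Markdown source so every reported number is regenerated from the current data and code. The file names are hypothetical; in a full workflow this call would typically be wired into a Makefile target and the repository versioned with Git.

```r
# Minimal sketch, assuming a hypothetical analysis.Rmd in the project root.
# Re-rendering regenerates every reported statistic from the current data
# and code, which is what "dynamic report generation" refers to above.
library(rmarkdown)

render(input         = "analysis.Rmd",          # hypothetical report source
       output_format = "html_document",
       output_file   = "analysis_report.html")

# Toward computational reproducibility: record the package versions used,
# so the computing environment can be reconstructed later.
writeLines(capture.output(sessionInfo()), "session_info.txt")
```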