
Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research

Hamed Meshkin¹, Joel Zirkle¹, Ghazal Arabidarrehdor¹

  • ¹Division of Applied Regulatory Science, Office of Clinical Pharmacology, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, WO Bldg 64, 10903 New Hampshire Ave, Silver Spring, MD 20993, United States.

Briefings in Bioinformatics | August 23, 2024


Summary
This summary is machine-generated.

Open-source large language models (LLMs) can be deployed locally for secure data processing. These models demonstrate strong performance in extracting clinical pharmacology information, even with minimal training data.
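As a rough illustration of the zero-shot and few-shot setups described here, the sketch below assembles prompts for extracting drug-exposure information from a label excerpt. The task wording, example labels, and the `build_prompt` helper are hypothetical placeholders, not taken from the study.

```python
# Hedged sketch: constructing zero-shot vs. few-shot prompts for
# extracting drug-exposure factors from drug-label text.
# All prompt text and examples below are invented for illustration.

TASK = ("Identify any intrinsic or extrinsic factors affecting drug "
        "exposure in the following label text. Answer with a short list.")

# Hypothetical labeled examples used only in the few-shot condition.
FEW_SHOT_EXAMPLES = [
    ("Co-administration with ketoconazole increased AUC 3-fold.",
     "drug-drug interaction (CYP3A4 inhibitor)"),
    ("No dose adjustment is required in mild renal impairment.",
     "renal impairment (no effect)"),
]

def build_prompt(label_text: str, shots: int = 0) -> str:
    """Assemble a prompt with `shots` worked examples prepended (0 = zero-shot)."""
    parts = [TASK]
    for text, answer in FEW_SHOT_EXAMPLES[:shots]:
        parts.append(f"Label: {text}\nAnswer: {answer}")
    parts.append(f"Label: {label_text}\nAnswer:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Grapefruit juice may increase plasma levels.")
few_shot = build_prompt("Grapefruit juice may increase plasma levels.", shots=2)
```

The assembled string would then be sent to a locally hosted open-source model; no external data transmission is needed, which is the privacy property the study emphasizes.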

Area of Science:

  • Artificial Intelligence
  • Natural Language Processing
  • Computational Biology

Background:

  • Large language models (LLMs) offer advanced conversational capabilities but often require data transmission to external servers.
  • Online LLM use poses data privacy risks, especially for sensitive information.
  • Organizations prioritizing data protection, like regulatory agencies, need secure, local AI solutions.

Purpose of the Study:

  • To evaluate the feasibility of implementing open-source LLMs within a secure local network.
  • To assess LLM performance in extracting clinical pharmacology information from drug labels.
  • To determine the efficacy of LLMs for sensitive data processing in regulated environments.

Main Methods:

  • Implementation of various open-source LLMs within a regulatory agency's local network.
  • Performance assessment using few-shot and zero-shot learning on specific NLP tasks.
  • Evaluation of a selected LLM for identifying drug exposure factors without fine-tuning.

Main Results:

  • Some open-source LLMs achieved performance comparable or superior to traditional models with minimal training.
  • A selected LLM accurately identified factors affecting drug exposure with 78.5% accuracy on a large dataset.
  • The study demonstrated successful local deployment for sensitive data analysis.

Conclusions:

  • Open-source LLMs can be effectively implemented in secure local networks for sensitive data tasks.
  • LLMs offer a viable solution for natural language processing when extensive training data is unavailable.
  • This approach enhances data privacy and security for regulatory and other high-priority organizations.

Keywords:

  • FDA labels
  • few-shot learning
  • large language models
  • pharmacokinetic drug–drug interactions and intrinsic factors
  • prompt engineering
  • zero-shot learning
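Accuracy figures like the 78.5% reported above are commonly computed as the fraction of items whose predicted label matches the annotation. The minimal sketch below illustrates such an exact-match evaluation; the predictions and gold annotations are invented for illustration and are not from the study.

```python
# Hedged sketch: exact-match accuracy over annotated drug labels.
# The predictions and gold annotations below are made up.

def accuracy(predictions, gold):
    """Fraction of predictions that exactly match the gold annotation."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

preds = ["hepatic impairment", "none", "CYP3A4 inhibitor", "renal impairment"]
gold  = ["hepatic impairment", "none", "CYP3A4 inducer",   "renal impairment"]
print(accuracy(preds, gold))  # 0.75 on this toy set
```

In practice the model's free-text answer would first be normalized (case, punctuation, synonym mapping) before comparison, since generative outputs rarely match annotations verbatim.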