How to Normalize cDNA Concentrations -- Ask TaqMan® Ep. 15 by Life Technologies

In this video, Sr. Field Applications Specialist Doug Rains examines why the choice of an appropriate normalizer gene is so critical for the accuracy of final real-time PCR data. In addition to discussing this gene's importance, Doug provides a helpful reference document which explains in detail how researchers can quantitatively validate their choice of a control gene.
Relative gene expression is one of the most common applications that researchers perform on their real-time instruments. So it's never a surprise when I receive a question like the one I got recently from Shan at Pennsylvania State University, who asks the following: "Since every cDNA is run with both the gene of interest and the internal control, do I have to be sure that the concentration of each cDNA is the same?" Excellent question. Let's start with a little bit of review.
Whenever we do a gene expression experiment, we need at least two gene-specific assays: one for our gene of interest, and one for an internal control gene, also sometimes referred to as a "normalizer," "housekeeping gene," or "endogenous control." Different terms, same idea.
So what does this second gene -- the normalizer -- do for us? Essentially, its number one job is to normalize our final data for differing input amounts of template. Here's what I mean.
Say I'm comparing the expression of my gene of interest in two samples: an untreated and a treated cell line. When I examine the real-time results, I find that my two samples differ by a single Ct. This result suggests that there's a two-fold difference in my target's expression between the two samples. But how do I know that two-fold difference is real?
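(A quick aside on that arithmetic: at roughly 100% PCR efficiency, the template doubles every cycle, so each Ct of separation corresponds to one doubling. Here's a minimal sketch in Python with made-up Ct values, purely for illustration.)

    # Hypothetical Ct values; assumes ~100% amplification efficiency,
    # i.e., the template doubles once per cycle.
    ct_untreated = 24.0  # untreated sample crosses threshold at cycle 24
    ct_treated = 23.0    # treated sample crosses one cycle earlier

    delta_ct = ct_untreated - ct_treated  # 1.0 Ct of separation
    fold_change = 2 ** delta_ct           # one Ct ~ one doubling
    print(fold_change)                    # 2.0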
After all, isn't it possible that my untreated and treated cDNAs had different concentrations, and that's the reason we're seeing a one-Ct difference? Definitely a possibility. Precisely why I also have to run a normalizer gene on each sample.
A good normalizer gene is one whose expression is stable across my various sample types, assuming equal starting amounts. Thanks to the data I collect from this second gene assay, I can effectively monitor input amounts of cDNA, even when they differ from sample to sample. Then, at the end of the run, I can use Ct data from the normalizer gene to correct final expression values for differing input amounts of cDNA. I can thereby avoid having to always add equal amounts of template when running my experiments.
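(To make that correction concrete, here's a minimal sketch of the comparative Ct, or ΔΔCt, calculation in Python. The sample names and Ct values are hypothetical, purely for illustration.)

    # Hypothetical Ct values for a target gene and a normalizer gene,
    # measured in an untreated (calibrator) and a treated sample.
    ct = {
        "untreated": {"target": 24.0, "normalizer": 20.0},
        "treated":   {"target": 23.0, "normalizer": 20.5},
    }

    # Delta Ct corrects each sample's target Ct for that sample's own
    # input amount, using the normalizer measured from the same cDNA.
    d_ct_untreated = ct["untreated"]["target"] - ct["untreated"]["normalizer"]
    d_ct_treated = ct["treated"]["target"] - ct["treated"]["normalizer"]

    # Delta-delta Ct compares the treated sample against the calibrator;
    # 2**-ddCt converts Ct units back into a fold change.
    dd_ct = d_ct_treated - d_ct_untreated
    relative_expression = 2 ** -dd_ct
    print(round(relative_expression, 2))  # ~2.83 with these numbers

Because the normalizer's Ct is subtracted sample by sample, any difference in cDNA input cancels out before the treated-versus-untreated comparison is made.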
But clearly, for the normalizer to do its job correctly, its expression has to be stable across our sample set. If it's not, the data from the normalizer can actually make our final expression values less accurate. So how does one choose a good normalizer?
To find out, I strongly recommend our kind viewers visit the Life Technologies web page, where you can download a copy of the Relative Gene Expression Workflow document. This PDF has an entire section on choosing the most appropriate normalizer, based not on guesswork, but on good, hard empirical data.
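(Just for a taste of what empirical validation can look like, here's a hypothetical sketch of one simple check: run equal input amounts across your sample types and see how much a candidate normalizer's Ct wanders. This is only an illustration, not the workflow document's full procedure.)

    # Hypothetical Ct values for one candidate normalizer, measured
    # across several sample types with equal cDNA input per reaction.
    import statistics

    candidate_cts = [20.1, 20.3, 19.9, 20.2, 20.0]
    spread = statistics.stdev(candidate_cts)

    # With equal input, a stable normalizer's Ct should barely move;
    # a spread approaching a full Ct would distort final fold changes.
    print(f"Ct standard deviation: {spread:.2f}")  # 0.16 here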