Shotgun proteomics refers to the use of bottom-up proteomics techniques to identify proteins in complex mixtures using high-performance liquid chromatography combined with mass spectrometry.[1][2][3][4][5][6] The name is derived from shotgun sequencing of DNA, which is itself named after the rapidly expanding, quasi-random firing pattern of a shotgun. In the most common shotgun proteomics workflow, the proteins in the mixture are digested and the resulting peptides are separated by liquid chromatography. Tandem mass spectrometry is then used to identify the peptides. Targeted proteomics using selected reaction monitoring (SRM) and data-independent acquisition methods are often considered alternatives to shotgun proteomics in the field of bottom-up proteomics. While shotgun proteomics uses data-dependent selection of precursor ions to generate fragment ion scans, these alternative methods acquire fragment ion scans deterministically.
Shotgun proteomics arose from the difficulties of using previous technologies to separate complex mixtures. In 1975, two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) was described by O'Farrell and Klose as a method able to resolve complex protein mixtures.[7][8] The development of matrix-assisted laser desorption ionization (MALDI), electrospray ionization (ESI), and database searching further advanced the field of proteomics. However, these methods still had difficulty identifying and separating low-abundance proteins, aberrant proteins, and membrane proteins. Shotgun proteomics emerged as a method that could resolve even these proteins.[5]
Shotgun proteomics allows global protein identification as well as the ability to systematically profile dynamic proteomes.[9] It also avoids the modest separation efficiency and poor mass spectral sensitivity associated with intact protein analysis.[1]
The dynamic exclusion filtering that is often used in shotgun proteomics maximizes the number of identified proteins at the expense of random sampling.[10] This problem may be exacerbated by the undersampling inherent in shotgun proteomics.[11]
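To make the idea of data-dependent acquisition with dynamic exclusion concrete, the following Python sketch models a simplified "top-N" precursor-selection cycle. The scan representation, function name, and default parameters are illustrative assumptions, not the behavior of any particular instrument's control software.

```python
# Illustrative sketch only: a simplified "top-N" data-dependent acquisition cycle
# with dynamic exclusion. The scan format, names, and defaults are assumptions,
# not the behavior of any real instrument control software.

def select_precursors(survey_scan, exclusion_list, current_time,
                      top_n=10, exclusion_window=30.0, mz_tol=0.01):
    """Pick up to top_n intense precursor m/z values from one MS1 scan,
    skipping any m/z selected within the last exclusion_window seconds."""
    # Drop exclusion entries whose time window has expired.
    exclusion_list[:] = [(mz, t) for mz, t in exclusion_list
                         if current_time - t < exclusion_window]
    selected = []
    # Consider peaks from most to least intense, as a top-N DDA method does.
    for mz, intensity in sorted(survey_scan, key=lambda p: p[1], reverse=True):
        if len(selected) >= top_n:
            break
        if any(abs(mz - ex_mz) <= mz_tol for ex_mz, _ in exclusion_list):
            continue  # still dynamically excluded
        selected.append(mz)
        exclusion_list.append((mz, current_time))  # exclude it in later cycles
    return selected

# Example: two MS1 peaks; the more intense one is selected first.
chosen = select_precursors([(445.12, 2.0e6), (512.30, 8.0e5)], [], current_time=120.0)
```

Because an already-selected precursor is skipped for a fixed time window, the instrument samples more distinct peptides per run, which is how dynamic exclusion increases the number of identified proteins while making the sampling less random.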
Cells containing the desired protein complement are grown. Proteins are then extracted from the mixture and digested with a protease to produce a peptide mixture.[9] The peptide mixture is loaded directly onto a microcapillary column, and the peptides are separated by hydrophobicity and charge. As the peptides elute from the column, they are ionized and separated by m/z in the first stage of tandem mass spectrometry. The selected ions undergo collision-induced dissociation or another process that induces fragmentation, and the charged fragments are then separated in the second stage of tandem mass spectrometry.
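The digestion step can be illustrated in silico. The Python sketch below simulates the commonly used protease trypsin, which cleaves C-terminal to lysine (K) or arginine (R) but generally not before proline (P); the function name, defaults, and example sequence are hypothetical simplifications of real digestion behavior.

```python
# Illustrative sketch only: in silico trypsin digestion (cleave after K or R,
# but not before P). The function name, defaults, and example are hypothetical.
import re

def tryptic_digest(protein_seq, missed_cleavages=1, min_length=6):
    """Return the set of tryptic peptides of at least min_length residues,
    allowing up to missed_cleavages missed cleavage sites."""
    # Split at zero-width positions after K/R that are not followed by P.
    fragments = re.split(r'(?<=[KR])(?!P)', protein_seq)
    peptides = set()
    for i in range(len(fragments)):
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            peptide = ''.join(fragments[i:j + 1])
            if len(peptide) >= min_length:
                peptides.add(peptide)
    return peptides

# Example with a short made-up sequence:
print(tryptic_digest("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSR"))
```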
The "fingerprint" of each peptide's fragmentation mass spectrum is used to identify the protein from which they derive by searching against a sequence database with commercially available software (e.g. Sequest or Mascot).[9] Examples of sequence databases are the Genpept database or the PIR database.[12] After the database search, each peptide-spectrum match (PSM) needs to be evaluated for validity.[13] This analysis allows researchers to profile various biological systems.[9]
Peptides that are degenerate (shared by two or more proteins in the database) make it difficult to unambiguously identify the protein from which they derive. Additionally, some vertebrate proteome samples contain a large number of paralogs, and alternative splicing in higher eukaryotes can result in many identical protein subsequences.[1] Moreover, many proteins are modified naturally (co- or post-translationally) or artificially (through sample-preparation artefacts), which further challenges identification of the peptide sequence by conventional database-matching approaches. Together with peptide fragmentation spectra of poor quality or high complexity (due to co-isolation or sensitivity limitations), this leaves many fragmentation spectra in a conventional shotgun proteomics experiment unidentified.[14][15][16][17] The sketch below illustrates the degeneracy problem.
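To show why degenerate peptides complicate protein inference, this Python sketch maps identified peptides back to a toy protein database and flags those shared by more than one entry; the data structures and names are hypothetical and much simpler than real protein-inference algorithms.

```python
# Illustrative sketch only: flag "degenerate" peptides that map to more than one
# database protein and therefore cannot identify a single protein unambiguously.
from collections import defaultdict

def map_peptides_to_proteins(identified_peptides, protein_db):
    """Return (peptide -> matching protein IDs, set of degenerate peptides).

    identified_peptides: iterable of peptide sequences from the search results.
    protein_db: dict of {protein_id: protein_sequence}.
    """
    peptide_to_proteins = defaultdict(set)
    for pep in identified_peptides:
        for prot_id, seq in protein_db.items():
            if pep in seq:                      # exact substring match
                peptide_to_proteins[pep].add(prot_id)
    # A peptide shared by two or more proteins supports all of them equally.
    degenerate = {p for p, prots in peptide_to_proteins.items() if len(prots) > 1}
    return dict(peptide_to_proteins), degenerate

# Example: the second peptide matches both (toy) paralogs and is flagged.
mapping, shared = map_peptides_to_proteins(
    ["MKTAYIAK", "QISFVK"],
    {"P1": "MKTAYIAKQISFVK", "P2": "MSSAYLAKQISFVK"},
)
```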
With the human genome sequenced, the next step is the verification and functional annotation of all predicted genes and their protein products.[4] Shotgun proteomics can be used for functional classification or comparative analysis of these protein products. It can be applied in projects ranging from large-scale whole-proteome analyses to studies focused on a single protein family, and can be carried out in research labs or commercially.
One example of this is a study by Washburn, Wolters, and Yates, in which they used shotgun proteomics on the proteome of a Saccharomyces cerevisiae strain grown to mid-log phase. They detected and identified 1,484 proteins, including proteins rarely seen in proteome analysis, such as low-abundance transcription factors and protein kinases. They also identified 131 proteins with three or more predicted transmembrane domains.[2]
Vaisar et al. used shotgun proteomics to implicate protease inhibition and complement activation in the anti-inflammatory properties of high-density lipoprotein.[18] In a study by Lee et al., higher expression levels of hnRNP A2/B1 and Hsp90 were observed in human hepatoma HepG2 cells than in wild-type cells, which led to a search for reported functional roles mediated in concert by these two multifunctional cellular chaperones.[19]
Original source: https://en.wikipedia.org/wiki/Shotgun_proteomics