Literature Review

In a Confirmation Report, the Literature Review contextualises existing research relevant to the research topic, providing the background and rationale for the proposed study. It also evaluates key trends in recent research, highlighting their strengths and weaknesses, to identify the need for further research that the proposed study aims to address.

The Literature Review analyses and evaluates existing research and creates the theoretical foundation that positions the proposed study. It fulfils multiple communicative purposes:

  • To identify the central focus and the main research themes of the proposed research
  • To critically evaluate and integrate theoretical perspectives and research findings relevant to the research topic
  • To establish the context for, and identify, the research gap
  • To justify the approach to the research and show how it will address its objectives.

The following extracts from the Literature Review sections of confirmation reports in Engineering have been annotated to illustrate how these communicative purposes are realised.

Extract A
Function: To identify the central focus and the main research themes of the proposed research

Protein subcellular localization [Note 1] is one of the most essential topics in proteomics research. Recent years have witnessed the rapid development of molecular biology and computer science, which makes it possible to use computational methods to determine the subcellular locations of proteins. This chapter introduces the background knowledge about proteins [Note 2], their subcellular locations [Note 3] and subcellular localization prediction [Note 4]. Different conventional methods for subcellular localization prediction are introduced, and finally our proposed methods are outlined.

Adapted from: S. Wan. “Protein Subcellular Localization Prediction Based on Gene Ontology and SVM”, PhD confirmation report, Dept. of Elec. and Inf. Eng., POLYU, Hong Kong, 2011.

Extract B
Function: To critically evaluate and integrate theoretical perspectives and research findings relevant to the research topic.

Advanced oxidation processes (AOPs) involving the generation of hydroxyl radicals (OH•) as the primary oxidant have been shown to be successful in degrading refractory organic contaminants in waters and wastewaters. Among various AOPs, high-frequency ultrasound (US) has attracted considerable interest in recent years by virtue of its particular comparative advantages, such as the avoidance of chemical dosing and catalysts, safety, a lower demand for solution clarity, etc. [Note 1] […].

Although US can achieve the degradation of refractory compounds, one of its shortcomings is its relatively low efficiency, mostly due to the inevitable recombination of generated radicals (ca. 80%) to form more stable molecules (H2O2, H2O, etc.), which reduces the effective contact between radicals and target contaminants. In order to counter these effects and enhance the oxidation performance of US, its combination with other AOP technologies […] has been tested in an attempt to show either an additive or a synergistic benefit [Note 2]. The hybrid technique of combining ultraviolet (UV) irradiation and US has been found to be beneficial in enhancing the degradation of target compounds, but the majority of previous studies have been conducted under photocatalyst-mediated conditions [4-6], which has the disadvantage of incurring the additional costs of the catalysts and their final disposal [Note 3]. The combination of catalyst-free UV and US (henceforth US/UV), however, has the advantage of [Note 4] […]. There is a need for more detailed information concerning the exact role of H2O2 in the treatment reactions and for a mechanistic model to describe the US/UV process; these are addressed in this study [Note 5].

Adapted from: L. Xu. “Degradation of Refractory Contaminants in Water by Chemical-Free Radicals Generated by Ultrasound and UV Irradiation”, PhD confirmation report, Dept. of Civil and Env. Eng., POLYU, Hong Kong, 2014.

Extract C
Function: To establish the context for the research gap and identify it.

Among all the methods mentioned above, composition-based methods are easy to implement and have obvious biological reasoning, but in most cases these methods perform poorly, which demonstrates that amino acid sequence information is not sufficient for protein subcellular localization. Besides, sorting-signal based methods […] However, this type of method can only deal with proteins that contain signal sequences. For example, the popular Target P [13], [25] can only detect three locations: chloroplast, mitochondria and secretory pathway (extracellular). Homology-based methods, on the other hand, can theoretically detect as many locations as appear in the training data and can achieve comparatively high accuracy [26]. [Note 1] However, when the training data contain sequences with low sequence similarity, or the numbers of samples in different classes are imbalanced, the performance is still very poor. While functional-domain based methods can often outperform sequence-based methods (as they can leverage the annotations in functional domain databases), they can only be applied to datasets whose sequences possess the required information, since not all sequences have so far been functionally annotated. Thus, they must be complemented by other types of methods [Note 2].

Adapted from: S. Wan. “Protein Subcellular Localization Prediction Based on Gene Ontology and SVM”, PhD confirmation report, Dept. of Elec. and Inf. Eng., POLYU, Hong Kong, 2011.

Extract D
Function: To justify the approach to the research and show how it will address its objectives.

The aim of the present research is to evaluate the compressive strength and hot working characteristics of the TX32 alloys with a view to understanding the effect of the combined additions of Al and Si to TX alloys. [Note 1] For this purpose, the compressive strength is measured in the temperature range 25-250 °C and the hot working deformation behaviour is evaluated in the temperature range 300-500 °C using compression tests [Note 2].

Adapted from: D. Chalasani. “Microstructure and Texture Evolution During Hot Working of Mg-3Sn-2Ca (TX32) Alloys With Micro Additions of ‘Al’ and ‘Si’”. PhD confirmation report, Dept. of Manuf. Eng. and Eng. Mgmt., CITYU, Hong Kong, 2011.

Evaluation and integration of theoretical perspectives and research findings:
Key steps
The evaluation of existing research is a crucial step in a Literature Review, as it allows the author to develop a stance, or an argument, that supports their approach to the proposed research. The argument is developed as the author summarises, synthesises, compares and critiques existing research studies. A Literature Review that scopes the field (i.e. determines what research to include and the extent to which it should be covered) and then incorporates these strategies is more effective than one that merely refers to randomly selected existing research.

  • Summarising involves the extraction of important findings from a research paper.
  • Synthesising refers to sorting and organising important findings selected from existing research and integrating the information into conceptual categories or themes, reflecting the author’s knowledge of the theoretical issues relevant to the topic. This process also involves comparing perspectives from existing research to highlight similarities and differences.
  • Critiquing involves identifying the limitations or significance of the research selected for review and including the author’s original perspectives, based on their understanding of the research reviewed.

Note:
In critically evaluating previous research, it is more acceptable to criticise the methods than the researchers whose work is reviewed.

The following extracts from a confirmation report in Computer Engineering are used to illustrate these four strategies.

Extract A: Summary and synthesis

Kupiec et al. (1995) present a middle ground between the shallow-feature extraction approach (Edmundson, 1969) and the modern statistical and corpus-based approaches. In the “hiatus” (Hovy, 2005: 583) of over 20 years in between, there were a number of cognitively grounded summarizing systems or models, such as FRUMP (Dejong, 1982) and SCISOR (Jacobs and Rau, 1990). [Note 1] They all take semantic representation as input and incorporate complicated knowledge processing, which makes them markedly different from today’s summarizing systems, which take text as input and utilize models and algorithms from AI and NLP. [Note 2] A more detailed introduction to such efforts can be found in Endres-Niggemeyer (1998: 310–330). Ushered in by Kupiec et al. (1995), the age of text summarization has arrived, with upgraded technology (machine learning, statistical, corpus-based, etc.), sharpened tools (lexical cohesion, discourse structure, graph model, etc.), and extended coverage (from single-document summarization to multi-document and query-focused summarization). [Note 3]

Adapted from: R. Zhang. “Coherence-Based Text Summarisation”, PhD confirmation report, Dept. of Computing, POLYU, Hong Kong, 2010.

Extract B: Comparison and critique

Many summarizing models and systems are traditionally oriented to generic summaries, which do not address a particular user need. On the other hand, query-focused summaries, which are produced in response to a user need or query and are related closely to question-answering systems and information extraction techniques (Jurafsky and Martin, 2009: 836-838), have attracted sustained research interest in the past decade. [Note 1]

A pioneering work (Baldwin and Morton, 1998) addresses an obvious difficulty caused by the query: co-reference identification of the key terms in the query. [Note 2] A query (or headline) term and its related terms form a co-reference chain, which is used to select sentences for the summary. Mani and Bloedorn (1999) report a more complicated query-based MDS system that is built on a standard “analysis-refinement-synthesis” architecture. [Note 3] In the analysis stage, documents are represented as graphs with words as nodes and word attributes and relations as edges. In the refinement stage, a spreading activation algorithm is used to reweight the nodes based on the user’s query. Then commonalities and differences between documents are represented as a matrix for sentence extraction.

Adapted from: R. Zhang. “Coherence-Based Text Summarisation”, PhD confirmation report, Dept. of Computing, POLYU, Hong Kong, 2010.
