Environmental problems in today's big cities continue to worsen, diminishing the quality of life in them. Of particular concern is the fact that today's megacities are evolving in the developing world without corresponding growth in the economy, infrastructure and other human development indices. As urban populations in these cities of the Global South continue to grow, governing institutions are usually unable to keep pace with their social responsibilities, making the issue of urban governance critical: effective and efficient urban governance is essential for the creation, strengthening and sustenance of governing institutions.
Lagos, a megacity of over 15.45 million people and the most populous metropolitan area on the African continent, epitomizes the grave characteristics of the emerging megacities of the Global South and thus constitutes an apt choice for understanding the emerging megacities of the next generation. Two out of every three Lagos residents live in slums under dehumanizing physical and social conditions. Many of them sleep, work, eat and cook under highway bridges, at the mercy of the elements.
This research therefore evaluated urban governance through housing administration in Africa's largest megacity. It examined the extent of housing problems in the city, their causal factors, and the culpability of the government agencies statutorily responsible for the provision, control and management of housing development in Lagos, the tenth largest city in the world. A representative part of the city that manifests the classic characteristics of slum life, listed by Mike Davis as the largest slum in Africa and the sixth largest in the world – Ajegunle – was adopted as the case study. The research design combined a rigorous literature search (desk research) with quantitative and, especially, qualitative approaches to data collection. The qualitative approach was emphasized because government officials often respond to enquiries with 'official answers and data' that may not be reliable, so the study had to rely on keen observation of physical traces, social interaction and personal investigation. A cross-sectional research method was adopted. Information was solicited from house-owners, building-industry professionals, sociologists and officials of relevant government agencies through questionnaires, interviews, focus group discussions and personal observations.
The analysis and discussion of these field data, in conjunction with the information from the desk research, gave a better understanding of the status quo and informed the recommendations proposed in the dissertation for mitigating the problems. The research discovered that many of the statutory housing agencies have the capacity to discharge their responsibilities effectively. However, it also showed that corruption and abdication of responsibility by the staff of these agencies are primary causes of the chasm between the lofty outcomes anticipated from the laudable building regulations and bye-laws and the appalling reality. It further found that a lack of political will and the apathy of successive Lagos State governments towards improving the housing conditions of the poor are major causes of the housing debacle in Lagos.
Several germane and realistic recommendations for redressing the situation were subsequently proffered. These include, among others, conducting an accurate census of Lagos in conjunction with credible international agencies, as a requisite basis for effective planning of any sort. The process of obtaining legal title to land should also be made less cumbersome, and the housing administration process should be computerized in order to reduce interpersonal contact between applicants and government officials to the barest minimum, as a means of curbing the widespread corruption in the system.
Texts from the web can be reused individually or in large quantities. The former is called text reuse and the latter language reuse. We first present a comprehensive overview of the different ways in which text and language are reused today, and how exactly information retrieval technologies can be applied in this respect. The remainder of the thesis then deals with specific retrieval tasks. In general, our contributions consist of models and algorithms, their evaluation, and, for that purpose, large-scale corpus construction.
The thesis is divided into two parts. The first part introduces technologies for text reuse detection, and our contributions are as follows: (1) A unified view of projecting-based and embedding-based fingerprinting for near-duplicate detection, and the first evaluation of fingerprint algorithms on Wikipedia revision histories as a new, large-scale corpus of near-duplicates. (2) A new retrieval model for the quantification of cross-language text similarity, which gets by without parallel corpora. We have evaluated the model against other models on many different pairs of languages. (3) An evaluation framework for text reuse and particularly plagiarism detectors, consisting of tailored detection performance measures and a large-scale corpus of automatically generated and manually written plagiarism cases, the latter obtained via crowdsourcing. This framework has been successfully applied to evaluate many state-of-the-art plagiarism detection approaches within three international evaluation competitions.
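The idea behind projecting-based fingerprinting for near-duplicate detection can be illustrated with a minimal sketch (the function names, the k-gram length and the signature size here are illustrative, not the thesis's actual implementation): hash every word k-gram of a document and keep only a small, deterministically selected subset of the hashes as the document's fingerprint; documents whose fingerprints overlap strongly are near-duplicate candidates.

```python
import hashlib

def fingerprint(text, k=3, n=4):
    """Projecting-based fingerprint (sketch): hash all word k-grams and
    keep the n smallest hash values as a compact document signature."""
    words = text.lower().split()
    grams = {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}
    hashes = sorted(int(hashlib.md5(g.encode()).hexdigest(), 16) for g in grams)
    return set(hashes[:n])

def resemblance(a, b):
    """Jaccard overlap of two fingerprints; close to 1.0 for near-duplicates,
    0.0 for unrelated texts."""
    fa, fb = fingerprint(a), fingerprint(b)
    return len(fa & fb) / len(fa | fb)
```

In a full system the fingerprints would be stored in an inverted index so that near-duplicate candidates can be retrieved without pairwise comparison of all documents.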
The second part introduces technologies that solve three retrieval tasks based on language reuse, and our contributions are as follows: (4) A new model for the comparison of textual and non-textual web items across media, which exploits web comments as a source of information about the topic of an item. In this connection, we identify web comments as a largely neglected information source and introduce the rationale of comment retrieval. (5) Two new algorithms for query segmentation, which exploit web n-grams and Wikipedia as a means of discerning the user intent of a keyword query. Moreover, we crowdsource a new corpus for the evaluation of query segmentation which surpasses existing corpora by two orders of magnitude. (6) A new writing assistance tool called Netspeak, which is a search engine for commonly used language. Netspeak indexes the web in the form of web n-grams as a source of writing examples and implements a wildcard query processor on top of it.
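Query segmentation as described in (5) can be sketched as a small dynamic program that splits a keyword query at the points maximizing the accumulated frequency of the resulting segments. The toy frequency table and the length-weighted scoring heuristic below are illustrative stand-ins for the web n-gram and Wikipedia statistics used in the thesis:

```python
def segment_query(query, ngram_freq):
    """Naive query segmentation (sketch): choose the split of the query
    into segments that maximizes the summed, length-weighted frequency of
    the segments. Unseen single words get a baseline score of 1 so that
    they remain viable fallback segments."""
    words = query.split()
    n = len(words)
    best = [(0.0, [])] * (n + 1)  # best[i] = (score, segmentation of words[:i])
    for i in range(1, n + 1):
        candidates = []
        for j in range(i):
            seg = " ".join(words[j:i])
            length = i - j
            freq = ngram_freq.get(seg, 1 if length == 1 else 0)
            candidates.append((best[j][0] + length * freq, best[j][1] + [seg]))
        best[i] = max(candidates, key=lambda c: c[0])
    return best[n][1]
```

With real web n-gram counts, such a segmenter would, for example, split "new york times square" into the segments a user most plausibly intended rather than into isolated keywords.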
Modern digital material approaches for the visualization and simulation of heterogeneous materials make it possible to investigate the behavior of complex multiphase materials, with their physically nonlinear material response, at various scales. However, these computational techniques require extensive hardware resources in terms of computing power and main memory to solve large-scale discretized 3D models numerically. Because the number of degrees of freedom can quickly reach the two-digit million range, the limited hardware resources must be utilized as efficiently as possible to execute the numerical algorithms in minimal computation time. In the field of computational mechanics, various methods and algorithms can optimize the runtime behavior of nonlinear simulation models; several such approaches are proposed and investigated in this thesis.
Today, the numerical simulation of damage effects in heterogeneous materials is performed through the adoption of multiscale methods. Consistent modeling in three-dimensional space with an appropriate discretization resolution on each scale (based on a hierarchical or concurrent multiscale model), however, still poses computational challenges with respect to the convergence behavior, the scale transition and the solver performance of the weakly coupled problems. The computational efficiency and the distribution of work among the available hardware resources (often a parallel hardware architecture) can be improved significantly. In recent years, high-performance computing (HPC) and graphics processing unit (GPU) based computation techniques have been established for the investigation of scientific problems. Their application leads to the modification of existing computational methods and the development of new ones, making it possible to take advantage of massively clustered computer hardware. In the field of numerical simulation in materials science, e.g. the investigation of damage effects in multiphase composites, the suitability of such models is often restricted by the number of degrees of freedom (d.o.f.) in the three-dimensional spatial discretization, which constrains the choice of implementation method for the nonlinear simulation procedure and, at the same time, strongly influences memory demand and computation time.
In this thesis, a hybrid discretization technique has been developed for the three-dimensional discretization of a three-phase material that respects the numerical efficiency of nonlinear (damage) simulations of these materials. The increase in computational efficiency is enabled by the improved scalability of the numerical algorithms. Consequently, substructuring methods for partitioning the hybrid mesh were implemented, tested and adapted to the HPC computing framework, using several hundred CPU (central processing unit) nodes to build the finite element assembly. A memory-efficient, iterative and parallelized equation solver, combined with a special preconditioning technique for solving the underlying equation system, was modified and adapted to enable combined CPU- and GPU-based computations.
The author therefore recommends applying the substructuring method for hybrid meshes, which respects the different material phases and their mechanical behavior and makes it possible to split the structure into elastic and inelastic parts. The consideration of the nonlinear material behavior, specified for the corresponding phase, is then limited to the inelastic domains only, which reduces the computing time of the nonlinear procedure. Because of the high numerical effort of such simulations, an alternative approach to nonlinear finite element analysis, based on sequentially linear analysis, was also implemented with scalable HPC in mind. The incremental-iterative procedure of the nonlinear step in finite element analysis (FEA) was replaced by a sequence of linear FE analyses whenever damage occurred in critical regions, known in the literature as the saw-tooth approach. As a result, qualitative (smeared) crack initiation in 3D multiphase specimens has been simulated efficiently.
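The saw-tooth idea can be illustrated on a toy parallel fiber-bundle model (a deliberate simplification; the variable names, the stiffness-reduction factor and the load values are illustrative, not taken from the thesis): each load step is a plain linear analysis, and whenever an element's stress exceeds its strength, that element's stiffness is reduced by a fixed factor and the linear analysis is simply repeated, instead of running incremental-iterative Newton steps.

```python
import numpy as np

def saw_tooth(E, strengths, u_steps, reduction=0.5, e_min=1e-6):
    """Sequentially linear ('saw-tooth') analysis sketch for a parallel
    fiber bundle under prescribed displacement (unit lengths and areas).
    Each displacement step is a linear analysis; overloaded fibers lose
    stiffness stepwise, producing the characteristic saw-tooth softening."""
    E = np.asarray(E, dtype=float).copy()
    forces = []
    for u in u_steps:
        while True:
            stress = E * u                       # plain linear analysis
            over = np.where(stress > strengths)[0]
            if over.size == 0:
                break                            # no new damage at this step
            # damage only the most critical fiber, then re-solve linearly
            worst = over[np.argmax(stress[over] / np.asarray(strengths)[over])]
            E[worst] = max(E[worst] * reduction, e_min)
        forces.append(stress.sum())              # total reaction force
    return np.array(forces)
```

In the thesis the "linear analysis" is of course a full 3D finite element solve on an HPC cluster; the sketch only shows why a sequence of linear solves can replace the nonlinear iteration.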
Gaze-based human-computer interaction has been a research topic for over a quarter of a century. Since its beginnings, the main scenario for gaze interaction has been helping handicapped people to communicate and interact with their environment. With the rapid development of mobile and wearable display technologies, a new application field for gaze interaction has appeared, opening new research questions.
This thesis investigates the feasibility of mobile gaze-based interaction, studying in depth the use of pie menus as a generic and robust widget for gaze interaction, as well as visual and perceptual issues of head-mounted (wearable) optical see-through displays.
It reviews conventional gaze-based selection methods and investigates in detail the use of pie menus for gaze control, studying and discussing layout issues, selection methods and applications. Results show that pie menus can accommodate up to six items per layer across multiple depth layers, allowing fast and accurate navigation through hierarchical levels by using or combining multiple selection methods. Based on these results, several text entry methods based on pie menus are proposed. Character-by-character text entry, text entry with bigrams, and text entry with bigrams derived from word prediction, as well as the possible selection methods, were examined in a longitudinal study. The data showed large advantages of the bigram entry methods over single-character text entry in both speed and accuracy. Participants preferred the novel saccade-based selection method (selecting by crossing borders) over the conventional and well-established dwell-time method.
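The two selection principles compared in the study can be sketched as follows (a minimal illustration with hypothetical names and parameters; the actual system operates on pie-menu slices rather than a single circular target): dwell-time selection fires after the gaze has rested on a target for a fixed duration, whereas border-based selection fires the moment a saccade carries the gaze across the menu's outer border.

```python
def dwell_select(samples, target, radius, dwell_ms):
    """Dwell-time selection sketch: select once the gaze has stayed within
    `radius` of `target` for `dwell_ms`. Each sample is (t_ms, x, y).
    Returns the selection time, or None if no selection occurred."""
    enter = None
    for t, x, y in samples:
        inside = (x - target[0]) ** 2 + (y - target[1]) ** 2 <= radius ** 2
        if inside:
            if enter is None:
                enter = t                 # gaze entered the target region
            elif t - enter >= dwell_ms:
                return t                  # dwelled long enough: select
        else:
            enter = None                  # leaving the region resets the timer

def border_select(samples, center, outer_radius):
    """Border-based selection sketch: selection fires as soon as the gaze
    crosses the outer border of the menu from the inside (a saccade out)."""
    was_inside = False
    for t, x, y in samples:
        inside = (x - center[0]) ** 2 + (y - center[1]) ** 2 <= outer_radius ** 2
        if was_inside and not inside:
            return t
        was_inside = inside
```

The sketch makes the trade-off visible: dwell selection costs a fixed waiting time per selection, while border selection is as fast as the saccade itself, which is consistent with the participants' preference reported above.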
On the one hand, pie menus proved to be a feasible and robust widget that may enable the efficient use of mobile eye-tracking systems, which may not be accurate enough to control elements of a conventional interface. On the other hand, visual perception on mobile display technologies needs to be examined in order to determine whether these results can be transferred to mobile devices.
Optical see-through devices enable observers to see additional information embedded in real environments. There is already some evidence of increased visual load on such systems. We investigated participants' visual performance in visual search and dual tasks, presenting visual stimuli on the optical see-through device only, on a computer screen only, and simultaneously on both devices. Results showed that switching between the presentation devices (i.e., perceiving information simultaneously from both devices) produced costs in visual performance. The implications of these costs, and of further perceptual and technical factors, for mobile gaze-based interaction are discussed, and solutions are proposed.