Abstract: | In the absence of entry barriers or regulatory restrictions, Non-Banking Financial Companies (NBFCs) grew rapidly and accessed public deposits without any regulatory control. The deposits of NBFCs grew from Rs. 41.9 crore in 1971 to Rs. 53,116.0 crore in 1997. This growth was the combined effect of an increase in the number of NBFCs and an increase in the amount of deposits. The deposits amassed as above were invested in various assets, especially motor vehicles, by these asset-financing NBFCs. Various tactics were adopted by these NBFCs and their agents for recovering the receivables outstanding on such assets. Both the central government and the RBI were concerned about the protection of depositors' interests, and various committees were set up to frame a comprehensive regulation for the functioning of these NBFCs. |
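The growth figures quoted above imply a steep annualised rate. A small sketch of that arithmetic, using only the numbers in the abstract (Rs. 41.9 crore in 1971 to Rs. 53,116.0 crore in 1997, a 26-year span); the `cagr` helper is illustrative, not part of the thesis:

```python
# Compound annual growth rate (CAGR) implied by the NBFC deposit
# figures quoted in the abstract.

def cagr(initial, final, years):
    """Annualised growth rate implied by growth from `initial` to `final`."""
    return (final / initial) ** (1.0 / years) - 1.0

growth = cagr(41.9, 53116.0, 1997 - 1971)
print(f"Implied CAGR: {growth:.1%}")  # roughly 31-32% per year
```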
Description: | School of Management Studies, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3466 |
Files | Size |
---|---|
Dyuthi-T1441.pdf | (4.415Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/5279 |
Files | Size |
---|---|
Dyuthi T-2315.pdf | (2.698Mb) |
Abstract: | Microarray data analysis is one of the data mining tools used to extract meaningful information hidden in biological data. One of the major focuses of microarray data analysis is the reconstruction of gene regulatory networks, which may be used to provide a broader understanding of the functioning of complex cellular systems. Since cancer is a genetic disease arising from abnormal gene function, the identification of cancerous genes and the regulatory pathways they control will provide a better platform for understanding tumor formation and development. The major focus of this thesis is to understand the regulation of genes responsible for the development of cancer, particularly colorectal cancer, by analyzing microarray expression data. In this thesis, four computational algorithms, namely a fuzzy logic algorithm, a modified genetic algorithm, a dynamic neural fuzzy network and a Takagi-Sugeno-Kang-type recurrent neural fuzzy network, are used to extract cancer-specific gene regulatory networks from a plasma RNA dataset of colorectal cancer patients. Plasma RNA is highly attractive for cancer analysis since it requires only a small amount of blood and can be obtained at any time in a repetitive fashion, allowing the analysis of disease progression and treatment response. |
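To give a flavour of the fuzzy-logic idea behind such network inference, here is a deliberately minimal, hypothetical sketch: expression levels are mapped to fuzzy sets ("low", "high") and a simple rule scores support for a candidate regulator-target link. The membership shapes and the scoring rule are illustrative assumptions, not the thesis's actual algorithm.

```python
# Minimal fuzzy-logic sketch for scoring a regulator-target link.
# All membership functions and rules here are illustrative only.

def mu_low(x):   # triangular membership for "low expression" on [0, 1]
    return max(0.0, 1.0 - x)

def mu_high(x):  # membership for "high expression" on [0, 1]
    return max(0.0, min(1.0, x))

def activation_score(regulator, target):
    """Support for 'regulator activates target': high regulator with
    high target, or low regulator with low target (fuzzy AND = min,
    fuzzy OR = max)."""
    return max(min(mu_high(regulator), mu_high(target)),
               min(mu_low(regulator), mu_low(target)))

# Correlated expression yields a high activation score; discordant
# expression yields a low one.
print(activation_score(0.9, 0.8))
print(activation_score(0.9, 0.1))
```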
Description: | Department of Computer Science, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3760 |
Files | Size |
---|---|
Dyuthi-T1722.pdf | (5.663Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/1341 |
Files | Size |
---|---|
Chandrasekharan Pillai N 1984.PDF | (196.7Kb) |
Abstract: | Public undertakings have been assigned a significant role to play in the systematic socio-economic development of India. My interest in the subject was kindled while I was doing my Masters Diploma in Public Administration at the Indian Institute of Public Administration, New Delhi during 1960-61. It was further strengthened by my teaching of the subject in different courses offered by me at the School of Management Studies and in several programmes organised by various voluntary and training organisations like the Institute of Management in Government, Trivandrum, Centre for Management Development, Trivandrum, etc. The several years in which I served as a member of the faculty in the School of Management Studies, University of Cochin, gave me the opportunity to come into close contact with different public sector concerns and their managers at various levels. This rich opportunity gave me a better insight into the problems faced by these concerns. The present study is a result of the interest so developed. |
Description: | School of Management Studies, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3572 |
Files | Size |
---|---|
Dyuthi-T1526.pdf | (16.57Mb) |
Abstract: | In this thesis, the applications of recurrence quantification analysis (RQA) to metal cutting operations on a lathe, with the specific objective of detecting tool wear and chatter, are presented. This study is based on the discovery that the process dynamics in a lathe are low-dimensional chaotic. It implies that the machine dynamics are controllable using principles of chaos theory. This understanding promises to revolutionize the feature extraction methodologies used in condition monitoring systems, as conventional linear methods or models are incapable of capturing the critical and strange behaviours associated with the metal cutting process. As sensor-based approaches provide an automated and cost-effective way to monitor and control, an efficient feature extraction methodology based on nonlinear time series analysis is in great demand. The task here is more complex when the information has to be deduced solely from sensor signals, since traditional methods do not address how to treat the noise present in real-world processes and their non-stationarity. In an effort to overcome these two issues to the maximum extent possible, this thesis adopts the recurrence quantification analysis methodology, since this feature extraction technique is found to be robust against noise and non-stationarity in the signals. The work consists of two different sets of experiments on a lathe: set-1 and set-2. The set-1 experiments study the influence of tool wear on the RQA variables, whereas set-2 is carried out to identify the RQA variables sensitive to machine tool chatter, followed by validation in actual cutting. To obtain the bounds of the spectrum of the significant RQA variable values in set-1, a fresh tool and a worn tool are used for cutting. The first part of the set-2 experiments uses a stepped shaft in order to create chatter at a known location.
The second part uses a conical section with a uniform taper along the axis, so that chatter onsets at some distance from the smaller end as the depth of cut is gradually increased while the spindle speed and feed rate are kept constant. The study concludes by unambiguously revealing the dependence of certain RQA variables (percent determinism, percent recurrence and entropy) on tool wear and chatter. The results establish this methodology as viable for the detection of tool wear and chatter in metal cutting operations on a lathe. The key reason is that the dynamics of the system under study are nonlinear, and recurrence quantification analysis can characterize them adequately. This work establishes that the principles and practice of machining can be considerably benefited and advanced by using nonlinear dynamics and chaos theory. |
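The RQA variables named above can be sketched compactly: build a recurrence matrix from a scalar time series with a distance threshold, then compute percent recurrence and percent determinism (the fraction of recurrent points lying on diagonal lines of length at least 2). The threshold and the toy sine signal below are illustrative choices, not the thesis's sensor data.

```python
# Minimal recurrence quantification analysis (RQA) sketch.
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: 1 where |x_i - x_j| <= eps."""
    x = np.asarray(x, dtype=float)
    return (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)

def percent_recurrence(R):
    n = R.shape[0]
    off = R.sum() - n              # exclude the trivial main diagonal
    return 100.0 * off / (n * n - n)

def percent_determinism(R, lmin=2):
    """Share of recurrent points lying on diagonal runs of length >= lmin."""
    n = R.shape[0]
    diag_pts = recur_pts = 0
    for k in range(1, n):          # upper diagonals; R is symmetric
        d = np.diag(R, k)
        recur_pts += d.sum()
        run = 0
        for v in list(d) + [0]:    # sentinel 0 flushes the final run
            if v:
                run += 1
            else:
                if run >= lmin:
                    diag_pts += run
                run = 0
    return 100.0 * diag_pts / recur_pts if recur_pts else 0.0

# A smooth periodic signal recurs along long diagonals, so DET is high.
t = np.linspace(0, 4 * np.pi, 80)
R = recurrence_matrix(np.sin(t), eps=0.1)
print(percent_recurrence(R), percent_determinism(R))
```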
Description: | Division of Mechanical Engineering, CUSAT |
URI: | http://dyuthi.cusat.ac.in/purl/2812 |
Files | Size |
---|---|
Dyuthi-T0833.pdf | (3.327Mb) |
Abstract: | The study shows that standard plastics like polypropylene (PP) and high density polyethylene (HDPE) can be reinforced by adding nylon short fibres. Compared to conventional glass-reinforced thermoplastics, this novel class of reinforced thermoplastics has the major advantage of recyclability. Hence such composites represent a new spectrum of recyclable polymer composites. The fibre length and fibre diameter used for reinforcement are critical parameters. While there is a critical fibre length below which no effective reinforcement takes place, the reinforcement improves when the fibre diameter decreases, due to the increased surface area. While the fibres alone give moderate reinforcement, chemical modification of the matrix can further improve the strength and modulus of the composites. Maleic anhydride grafting in the presence of styrene was found to be the most efficient chemical modification. While fibre addition enhances the viscosity of the melt at lower shear rates, the enhancement at higher shear rates is only marginal. This shows that processing of the composite can be done in a similar way to that of the matrix polymer in high-shear operations such as injection moulding. Another significant observation is the decrease in melt viscosity of the composite upon grafting. Thus chemical modification of the matrix makes processing of the composite easier, in addition to improving the mechanical load bearing capacity. For the development of a useful short fibre composite, selection of proper materials, optimum design with regard to the particular product and choice of proper processing parameters are most essential. Since there is a co-influence of many parameters, analytical solutions are difficult. Hence, for selecting proper processing parameters, 'mold flow' software was utilized. The orientation of the fibres, mechanical properties, temperature profile, shrinkage, fill time, etc.
were determined using the software. Another interesting feature of the nylon fibre/PP and nylon fibre/HDPE composites is their thermal behaviour. Both nylon and PP degrade at the same temperature in single steps, and hence the thermal degradation behaviour of the composites is also predictable. It is observed that the thermal behaviour of the matrix and the reinforcement do not affect each other. Almost similar behaviour is observed in the case of nylon fibre/HDPE composites. Another equally significant factor is the nucleating effect of the nylon fibre when the composite melt cools down. In the presence of the fibre, the onset of crystallization occurs at a slightly higher temperature. When the matrix is modified by grafting, the onset of crystallization occurs at a still higher temperature. Hence it may be concluded that one reason for the improvement in the mechanical behaviour of the composite is the difference in the crystallization behaviour of the matrix in the presence of the fibre. As mentioned earlier, a major advantage of these composites is their recyclability. Two basic approaches may be employed for recycling, namely low temperature recycling and high temperature recycling. In low temperature recycling, the recycling is done at a temperature above the melting point of the matrix but below that of the fibres, while in the high temperature route, the recycling is done at a temperature above the melting points of both matrix and fibre. The former is particularly interesting in that the recycled material has equal or even better mechanical properties compared to the initial product. This is possible because the orientation of the fibre can improve with successive recycling. Hence such recycled composites can be used for the same applications for which the original composite was developed.
In high temperature recycling, the composite is converted into a blend, and hence the properties will be inferior to those of the original composite, but will be higher than those of the matrix material alone. |
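The critical fibre length mentioned in the abstract is commonly estimated with the Kelly-Tyson relation l_c = sigma_f * d / (2 * tau), where sigma_f is the fibre tensile strength, d the fibre diameter and tau the interfacial shear strength. The numbers below are illustrative assumptions, not measured values from the thesis:

```python
# Kelly-Tyson estimate of critical fibre length for short-fibre
# reinforcement. All input values are assumed, for illustration only.

def critical_fibre_length(sigma_f, d, tau):
    """Critical length l_c = sigma_f * d / (2 * tau), in the units of d."""
    return sigma_f * d / (2.0 * tau)

# e.g. a nylon fibre: sigma_f ~ 500 MPa, d ~ 25 um, tau ~ 5 MPa (assumed)
lc = critical_fibre_length(500e6, 25e-6, 5e6)
print(f"critical length ~ {lc * 1e3:.2f} mm")
```

Fibres shorter than l_c pull out of the matrix before reaching their full load-bearing capacity, which is why the abstract notes that no effective reinforcement takes place below this length.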
Description: | Department of Polymer Science and Rubber Technology, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/2207 |
Files | Size |
---|---|
Dyuthi-T0562.pdf | (6.248Mb) |
Abstract: | The classic experiment of Heinrich Hertz verified the theoretical prediction of Maxwell that both radio and light waves are physical phenomena governed by the same physical laws. This started a new era of interest in the interaction of electromagnetic energy with matter. The scattering of electromagnetic waves from a target is cleverly utilized in RADAR, an electronic system used to detect and locate objects under unfavourable conditions or obscuration that would render the unaided eye useless. It also provides a means for measuring precisely the range, or distance, of an object and the speed of a moving object. When an obstacle is illuminated by electromagnetic waves, energy is dispersed in all directions. The dispersed energy depends on the size, shape and composition of the obstacle and on the frequency and nature of the incident wave. This distribution of energy is known as 'scattering' and the obstacle as a 'scatterer' or 'target'. |
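The two radar measurements the abstract mentions rest on simple relations: range R = c*t/2 from the round-trip echo delay t, and radial speed v = f_d * lambda / 2 from the Doppler shift f_d at wavelength lambda. A small sketch, with illustrative values:

```python
# Basic radar relations: range from echo delay, speed from Doppler shift.

C = 3.0e8  # nominal speed of light, m/s

def echo_range(delay_s):
    """Target range from the round-trip echo delay (factor 1/2 because
    the pulse travels out and back)."""
    return C * delay_s / 2.0

def doppler_speed(f_doppler_hz, wavelength_m):
    """Radial speed of a target from its Doppler shift."""
    return f_doppler_hz * wavelength_m / 2.0

print(echo_range(100e-6))         # a 100 microsecond echo -> 15 km
print(doppler_speed(2000, 0.03))  # 2 kHz shift at lambda = 3 cm -> 30 m/s
```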
Description: | Department of Electronics, Cochin University of Science And Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3723 |
Files | Size |
---|---|
Dyuthi-T1687.pdf | (5.067Mb) |
Abstract: | Information and communication technologies (ICTs) are the tools that underpin the emerging "Knowledge Society". Exchange of information or knowledge between people and through networks of people has always taken place. But ICT has radically changed the magnitude of this exchange, and thus factors such as the timeliness of information and information dissemination patterns have become more important than ever. Since information and knowledge are so vital for all-round human development, libraries and institutions that manage these resources are indeed invaluable. So, Library and Information Centres have a key role in the acquisition, processing, preservation and dissemination of information and knowledge. In the modern context, the library provides services based on different types of documents: manuscripts, printed, digital, etc. At the same time, the acquisition, access, processing, service, etc. of these resources have become more complicated than ever before. ICT has been instrumental in extending libraries beyond the physical walls of a building and in providing assistance in navigating and analyzing tremendous amounts of knowledge with a variety of digital tools. Thus, modern libraries are increasingly being re-defined as places to get unrestricted access to information in many formats and from many sources. The research was conducted in the university libraries of Kerala State, India. It was identified that even though information resources are flooding in the world over and several technologies have emerged to manage the situation for providing effective services to their clientele, most of the university libraries in Kerala were unable to exploit these technologies to the maximum level. Though the libraries have automated many of their functions, a wide gap prevails between the possible services and the provided services.
There are many good examples the world over of the application of ICTs in libraries for the maximization of services, and many such libraries have adopted the principles of re-engineering and re-defining as a management strategy. Hence this study was targeted to look into how effectively modern ICTs have been adopted in our libraries for maximizing the efficiency of operations and services, and whether the principles of re-engineering and re-defining can be applied towards this. Data was collected from library users, viz. student as well as faculty users, library professionals and university librarians, using structured questionnaires. This has been supplemented by observation of the working of the libraries, discussions and interviews with the different types of users and staff, review of literature, etc. Personal observations of the organizational set-up, management practices, functions, facilities, resources, and the utilization of information resources and facilities by the users of the university libraries in Kerala have been made. Statistical techniques like percentage, mean, weighted mean, standard deviation, correlation, trend analysis, etc. have been used to analyse the data. All the libraries could exploit only a very few possibilities of modern ICTs, and hence they could not achieve effective Universal Bibliographic Control or the desired efficiency and effectiveness in services. Because of this, the users as well as the professionals are dissatisfied.
Functional effectiveness in the acquisition, access and processing of information resources in various formats, development and maintenance of OPAC and WebOPAC, digital document delivery to remote users, Web-based clearing of library counter services and resources, development of full-text databases, digital libraries and institutional repositories, consortia-based operations for e-journals and databases, user education and information literacy, professional development with stress on ICTs, network administration and website maintenance, marketing of information, etc. are major areas needing special attention to improve the situation. Finance, the level of ICT knowledge among library staff, professional dynamism and leadership, the vision and support of administrators and policy makers, the prevailing educational set-up and social environment in the state, etc. are some of the major hurdles to reaping the maximum possibilities of ICTs in the university libraries in Kerala. The principles of Business Process Re-engineering are found suitable for effective application in re-structuring and redefining the operations and service systems of the libraries. Most of the conventional departments or divisions prevailing in the university libraries were functioning as watertight compartments, and their existing management system was too rigid to adopt the principles of change management. Hence, a thorough re-structuring of the divisions was indicated. Consortia-based activities and the pooling and sharing of information resources were advocated to meet the varied needs of the users in the main campuses and off-campuses of the universities, affiliated colleges and remote stations. A uniform staff policy similar to that prevailing in CSIR, DRDO, ISRO, etc.
has been proposed by the study, not only for the university libraries in Kerala but for the entire country. Restructuring of LIS education and the integrated and planned development of school, college, research and public library systems, etc. were also justified for reaping the maximum benefits of modern ICTs. |
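Among the statistical techniques the study lists, the weighted mean is the least self-explanatory; a small sketch of how it applies to Likert-style questionnaire responses, with made-up respondent counts:

```python
# Weighted mean of survey responses: each score is weighted by the
# number of respondents choosing it. The counts below are hypothetical.

def weighted_mean(values, weights):
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

scores = [1, 2, 3, 4, 5]          # satisfaction scale
counts = [10, 25, 40, 15, 10]     # hypothetical respondent counts
print(round(weighted_mean(scores, counts), 2))
```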
Description: | Department of Computer Applications, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/2888 |
Files | Size |
---|---|
Dyuthi-T0885.pdf | (22.03Mb) |
Abstract: | NABARD has completed 14 years of operation. In the light of its experiences and achievements, the performance evaluation of the National Bank needs to be looked into. This could provide certain criteria for its strengths and weaknesses, which may help in consolidating the institution for better utilisation of its potential. It is also noteworthy that no evaluative study on the National Bank has been conducted in Kerala. The major objective of this study is to evaluate the role of NABARD in catering to the long-term agricultural requirements of Kerala from 1982 to 1992. This is done by analysing the quantum and quality of NABARD's schematic refinance. The qualitative indices, namely (1) the efficiency of loan recovery, (2) the impact or financial viability of NABARD-refinanced schemes, (3) the credit gap, (4) the commitment-disbursement gap, and (5) the imbalances in NABARD refinance, form the core of the study. Hypotheses were formulated in order to study and analyse these qualitative indices. The study is presented in eight chapters. |
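Two of the qualitative indices listed above can be sketched as simple ratios: loan recovery efficiency (recoveries as a share of the demand) and the commitment-disbursement gap (refinance sanctioned but not drawn). The rupee figures below are hypothetical, for illustration only:

```python
# Simple sketches of two qualitative indices from the study.
# All monetary figures are made up for illustration.

def recovery_efficiency(recovered, demand):
    """Recoveries as a percentage of the demand raised."""
    return 100.0 * recovered / demand

def commitment_disbursement_gap(committed, disbursed):
    """Sanctioned refinance that was never actually drawn."""
    return committed - disbursed

print(recovery_efficiency(72.0, 120.0))           # percent recovered
print(commitment_disbursement_gap(500.0, 430.0))  # undrawn amount
```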
Description: | Department of Applied Economics, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3453 |
Files | Size |
---|---|
Dyuthi-T1465.pdf | (7.142Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/1663 |
Files | Size |
---|---|
Dyuthi-T0148.pdf | (2.294Mb) |
Abstract: | The study makes an attempt to examine inter-regional variations in economic development in Kerala with respect to the important indicators of development over the period 1971 to 2001. The study takes districts as the unit of analysis because it attempts to find out the status of the districts in Kerala. The study proved that inter-district disparities exist in economic development measured in terms of the different indices used for the analysis. Statistical estimation of variation proves that there is a high degree of variation in the industrial sector, followed by social and economic infrastructure. The composite index of industrial development shows that the highest index is 1.395, which is five times greater than the lowest index of 0.273. More or less the same pattern of differences is noticed in most of the indicators of development. A ranking of the districts on the basis of the overall development indicators shows that Malappuram is the least developed district in Kerala. In the case of almost all indicators of development, Malappuram is lagging behind all other districts. |
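A composite development index and district ranking of the kind described can be built by min-max normalising each indicator across districts and averaging the normalised scores. The district names and figures below are hypothetical, and the thesis's actual index construction may differ:

```python
# Composite index sketch: min-max normalise each indicator across
# districts, average the normalised scores, then rank. Data is made up.

def composite_index(data):
    """data: {district: [indicator values]} -> {district: index in [0, 1]}."""
    districts = list(data)
    n_ind = len(next(iter(data.values())))
    norm = {d: [] for d in districts}
    for j in range(n_ind):
        col = [data[d][j] for d in districts]
        lo, hi = min(col), max(col)
        for d in districts:
            norm[d].append((data[d][j] - lo) / (hi - lo) if hi > lo else 0.0)
    return {d: sum(v) / n_ind for d, v in norm.items()}

idx = composite_index({"A": [10, 200], "B": [30, 400], "C": [20, 300]})
ranking = sorted(idx, key=idx.get, reverse=True)
print(ranking)  # most developed first
```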
URI: | http://dyuthi.cusat.ac.in/purl/54 |
Files | Size |
---|---|
Dyuthi-T0121.pdf | (8.327Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/5592 |
Files | Size |
---|---|
Dyuthi T-2633.pdf | (7.389Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/5223 |
Files | Size |
---|---|
Dyuthi T-2258.pdf | (2.864Mb) |
Abstract: | Multivariate lifetime data arise in various forms, including recurrent event data, when individuals are followed to observe the sequence of occurrences of a certain type of event; and correlated lifetimes, when an individual is followed for the occurrence of two or more types of events, or when distinct individuals have dependent event times. In most studies there are covariates, such as treatments, group indicators, individual characteristics, or environmental conditions, whose relationship to lifetime is of interest. This leads to a consideration of regression models. The well-known Cox proportional hazards model and its variations using marginal hazard functions, employed for the analysis of multivariate survival data in the literature, are not sufficient to explain the complete dependence structure of a pair of lifetimes on the covariate vector. Motivated by this, in Chapter 2 we introduced a bivariate proportional hazards model using the vector hazard function of Johnson and Kotz (1975), in which the covariates under study have different effects on the two components of the vector hazard function. The proposed model is useful in real-life situations for studying the dependence structure of a pair of lifetimes on the covariate vector. The well-known partial likelihood approach is used for the estimation of the parameter vectors. We then introduced a bivariate proportional hazards model for gap times of recurrent events in Chapter 3. The model incorporates both the marginal and the joint dependence of the distribution of gap times on the covariate vector. In many fields of application, the mean residual life function is considered a superior concept to the hazard function. Motivated by this, in Chapter 4 we considered a new semi-parametric model, the bivariate proportional mean residual life time model, to assess the relationship between mean residual life and covariates for gap times of recurrent events.
The counting process approach is used for the inference procedures for the gap times of recurrent events. In many survival studies, the distribution of lifetime may depend on the distribution of the censoring time. In Chapter 5, we introduced a proportional hazards model for duration times and developed inference procedures under dependent (informative) censoring. In Chapter 6, we introduced a bivariate proportional hazards model for competing risks data under right censoring. The asymptotic properties of the estimators of the parameters of the different models developed in the previous chapters were studied. The proposed models were applied to various real-life situations. |
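The partial likelihood the abstract refers to can be sketched for the ordinary (univariate) Cox model, whose hazard is h(t|z) = h0(t) * exp(beta * z): the log partial likelihood sums, over observed failures, the log-ratio of the failed subject's risk score to the total risk score of those still at risk. The bivariate vector-hazard models of the thesis extend this idea; the toy data below is made up.

```python
# Cox log partial likelihood for one covariate, no tied failure times.
import math

def cox_log_partial_likelihood(beta, times, events, z):
    """times: failure/censoring times; events: 1 if failure observed;
    z: scalar covariate per subject."""
    ll = 0.0
    for i in range(len(times)):
        if events[i]:
            # total risk score of subjects still at risk at times[i]
            risk = sum(math.exp(beta * z[j])
                       for j in range(len(times)) if times[j] >= times[i])
            ll += beta * z[i] - math.log(risk)
    return ll

times  = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]           # subject 3 is right-censored
z      = [0.0, 1.0, 1.0, 0.0]
print(cox_log_partial_likelihood(0.5, times, events, z))
```

At beta = 0 every risk score is 1, so each failure contributes minus the log of the size of its risk set, which gives a handy sanity check.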
Description: | Department of Statistics, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/2708 |
Files | Size |
---|---|
Dyuthi-T0758.pdf | (5.959Mb) |
Abstract: | One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming, etc. are efficient in handling complex cost functions, but are limited in handling the stochastic data existing in a practical system. Also, the learning steps have to be repeated for each load demand, which increases the computation time. Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation. It can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through the Reinforcement Learning approach, but its applications in the field of power systems have been few. The objective is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) Evolve Reinforcement Learning based solutions to the Unit Commitment problem. (ii) Find suitable solution strategies through the Reinforcement Learning approach for Economic Dispatch. (iii) Extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective. (iv) Check the suitability of the scheduling solutions for one of the existing power systems. The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem.
The Unit Commitment problem is formulated as a multi-stage decision process. A Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up time / down time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like the Genetic Algorithm. The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision making strategy is first proposed in the Learning Automata algorithm. Then, to solve the scheduling task of systems with a large number of generating units, the problem is formulated as a multi-stage decision making task. The solution obtained is extended in order to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle continuous state space, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases. The proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing, etc. As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to solve the Automatic Generation Control loop. The RL solution is extended to take up the approach of a common frequency for all the interconnected areas, more similar to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller. In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken for a case study.
The performance of the Reinforcement Learning solution is found to be better than the other existing methods, which provides a promising step towards RL-based control schemes for the practical power industry. Reinforcement Learning is applied to solve the scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for getting more profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with on-line data, economic scheduling can be achieved instantaneously in a generation control center. Also, power scheduling of systems with different sources such as hydro, thermal, etc. can be looked into, and Reinforcement Learning solutions can be achieved. |
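The Q-learning formulation of a multi-stage decision process can be sketched on a toy "commitment" task: at each of three stages the agent decides whether to run an extra unit (action 1) or not (action 0), paying a cost per action, and the cheapest schedule should emerge. This tiny deterministic MDP is illustrative only, not the thesis's actual Unit Commitment formulation.

```python
# Minimal tabular Q-learning sketch on a 3-stage toy scheduling task.
import random

random.seed(0)
N_STAGES, ACTIONS = 3, (0, 1)
COST = {0: 2.0, 1: 1.0}          # in this toy, running the unit is cheaper
Q = {(s, a): 0.0 for s in range(N_STAGES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 1.0, 0.2

for episode in range(500):
    for s in range(N_STAGES):
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        r = -COST[a]             # maximising reward == minimising cost
        nxt = 0.0 if s == N_STAGES - 1 else max(Q[(s + 1, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STAGES)]
print(policy)  # the learned greedy schedule
```

With deterministic costs and ample exploration, the greedy policy converges to the cheaper action at every stage.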
Description: | School of Engineering, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/2817 |
Files | Size |
---|---|
Dyuthi-T0837.pdf | (6.227Mb) |
URI: | http://dyuthi.cusat.ac.in/purl/5350 |
Files | Size |
---|---|
Dyuthi T-2407.pdf | (6.653Mb) |
Abstract: | Reliability analysis is a well established branch of statistics that deals with the statistical study of different aspects of the lifetimes of a system of components. As pointed out earlier, a major part of the theory and applications in connection with reliability analysis has been discussed based on measures in terms of the distribution function. In the beginning chapters of the thesis, we have described some attractive features of quantile functions and the relevance of their use in reliability analysis. Motivated by the works of Parzen (1979), Freimer et al. (1988) and Gilchrist (2000), who indicated the scope of quantile functions in reliability analysis, and as a follow-up of the systematic study in this connection by Nair and Sankaran (2009), in the present work we have tried to extend their ideas to develop the necessary theoretical framework for lifetime data analysis. In Chapter 1, we have given the relevance and scope of the study and a brief outline of the work we have carried out. Chapter 2 of this thesis is devoted to the presentation of various concepts and their brief reviews, which were useful for the discussions in the subsequent chapters. In the introduction of Chapter 4, we have pointed out the role of ageing concepts in reliability analysis and in identifying life distributions. In Chapter 6, we have studied the first two L-moments of residual life and their relevance in various applications of reliability analysis. We have shown that the first L-moment of the residual function is equivalent to the vitality function, which has been widely discussed in the literature. In Chapter 7, we have defined the percentile residual life in reversed time (RPRL) and derived its relationship with the reversed hazard rate (RHR). We have discussed the characterization problem of RPRL and demonstrated with an example that the RPRL for a given percentile does not determine the distribution uniquely. |
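A small sketch of the quantile-based viewpoint: given a quantile function Q(u), the hazard quantile function is H(u) = 1 / ((1 - u) * q(u)), where q(u) = Q'(u) is the quantile density. For the exponential distribution, Q(u) = -ln(1 - u) / lam, so H(u) should recover the constant hazard lam. The numerical-derivative step size is an implementation choice here, not part of the theory.

```python
# Hazard quantile function H(u) = 1 / ((1 - u) * q(u)) computed from a
# quantile function Q via a central-difference quantile density.
import math

def hazard_quantile(Q, u, h=1e-6):
    q = (Q(u + h) - Q(u - h)) / (2 * h)   # numerical q(u) = Q'(u)
    return 1.0 / ((1.0 - u) * q)

lam = 2.0
Q_exp = lambda u: -math.log(1.0 - u) / lam   # exponential quantile function

print(hazard_quantile(Q_exp, 0.3))  # should be close to lam = 2.0
print(hazard_quantile(Q_exp, 0.7))  # constant in u for the exponential
```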
Description: | Department of Statistics, Cochin University of Science and Technology |
URI: | http://dyuthi.cusat.ac.in/purl/3157 |
Files | Size |
---|---|
Dyuthi-T1131.pdf | (3.496Mb) |
Abstract: | Software systems are progressively being deployed in many facets of human life. The implications of the failure of such systems have an assorted impact on their customers. The fundamental aspect that supports a software system is a focus on quality. Reliability describes the ability of a system to function under a specified environment for a specified period of time, and is used to objectively measure quality. Evaluation of the reliability of a computing system involves computation of hardware and software reliability. Most of the earlier works focused on software reliability with no consideration for the hardware parts, or vice versa. However, a complete estimation of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying the failure data for hardware components and software components, and building a model based on it to predict the reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on the modeling and measurement of the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software, an evaluation of reliability growth models, and presents an integrated model for the prediction of the reliability of a computational system. The developed model has been compared with existing models and its usefulness is discussed. |
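A classic reliability growth model of the kind such evaluations cover is the Goel-Okumoto NHPP, with expected cumulative failures mu(t) = a * (1 - exp(-b * t)) and conditional reliability R(x | t) = exp(-(mu(t + x) - mu(t))). The parameters a and b below are hypothetical, standing in for values fitted from failure data; this is a sketch of the model family, not the thesis's integrated model.

```python
# Goel-Okumoto software reliability growth model (NHPP) sketch.
import math

def mu(t, a, b):
    """Expected cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def reliability(x, t, a, b):
    """Probability of no failure in (t, t + x], given testing up to t."""
    return math.exp(-(mu(t + x, a, b) - mu(t, a, b)))

a, b = 100.0, 0.05            # assumed fitted parameters
print(mu(10, a, b))           # expected failures in the first 10 time units
print(reliability(5, 50, a, b))
```

As testing time t grows, mu(t) flattens towards a and the conditional reliability over any fixed window rises, which is the "growth" the model name refers to.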
URI: | http://dyuthi.cusat.ac.in/purl/4965 |
Files | Size |
---|---|
Dyuthi-T2041.pdf | (3.721Mb) |
Dyuthi Digital Repository Copyright © 2007-2011 Cochin University of Science and Technology. Items in Dyuthi are protected by copyright, with all rights reserved, unless otherwise indicated.