
A Comparative Study of Non Linear Conjugate Gradient Methods

We study the development of nonlinear conjugate gradient methods, focusing on Fletcher-Reeves (FR) and Polak-Ribière (PR). FR extends the linear conjugate gradient method to nonlinear functions through two changes: the step length αk is computed by a line search, and the residual rk (rk = b - Axk) is replaced by the gradient of the nonlinear objective function. The PR method is equivalent to the FR method when exact line searches are used and the objective is a strongly convex quadratic. PR is essentially a variant of FR, differing primarily in the choice of the parameter βk. Applying the MATLAB implementations of the FR and PR algorithms to the nonlinear Rosenbrock function, we observe that PR (k = 29) performs far better than FR (k = 42). However, when the same codes are applied to general nonlinear functions, specifically functions whose minimum is a large negative number far from zero and whose iterates are likewise large values far from zero, the PR algorithm does not perform well. This problem with the PR method persists even if we run the PR algorithm for more iterations or with an initial guess closer to the …
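A minimal Python sketch of the two update rules described in the abstract, assuming a simple backtracking (Armijo) line search in place of the exact line search; the tolerances, the descent-direction restart, and the clipping of the PR parameter at zero are common safeguards assumed here, not details taken from the thesis's MATLAB code:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-6, max_iter=500):
    """Nonlinear conjugate gradient with FR or PR choice of beta_k."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                              # initial direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            return x, k
        if g.dot(d) >= 0:               # restart if d is not a descent direction
            d = -g
        # Backtracking (Armijo) line search for the step length alpha_k
        alpha, c, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_rule == "FR":           # Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2
            beta = g_new.dot(g_new) / g.dot(g)
        else:                           # Polak-Ribiere: g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2,
            beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))  # clipped at 0 (PR+)
        d = -g_new + beta * d           # update the conjugate direction
        x, g = x_new, g_new
    return x, max_iter

# Example: the Rosenbrock function mentioned in the abstract
rosen = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
rosen_grad = lambda x: np.array([
    -400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
    200 * (x[1] - x[0]**2),
])
x_star, iters = nonlinear_cg(rosen, rosen_grad, np.array([-1.2, 1.0]), beta_rule="PR")
print(x_star, iters)
```

The only algorithmic difference between the two methods is the βk formula; everything else in the iteration is shared, which is why the abstract describes PR as a variant of FR.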
Date: August 2013
Creator: Pathak, Subrat
System: The UNT Digital Library
An Exploration of the Word2vec Algorithm: Creating a Vector Representation of a Language Vocabulary that Encodes Meaning and Usage Patterns in the Vector Space Structure

This thesis is an exploration and exposition of a highly efficient shallow neural network algorithm called word2vec, developed by T. Mikolov et al. to create vector representations of a language vocabulary such that information about the meaning and usage of the vocabulary words is encoded in the vector space structure. Chapter 1 introduces natural language processing, vector representations of language vocabularies, and the word2vec algorithm. Chapter 2 reviews the basic mathematical theory of deterministic convex optimization. Chapter 3 provides background on some concepts from computer science that are used in the word2vec algorithm: Huffman trees, neural networks, and binary cross-entropy. Chapter 4 provides a detailed discussion of the word2vec algorithm itself, including continuous bag of words, skip-gram, hierarchical softmax, and negative sampling. Finally, Chapter 5 explores some applications of vector representations: word categorization, analogy completion, and language translation assistance.
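As an illustration of one training scheme named in the abstract, here is a minimal Python (NumPy) sketch of skip-gram with negative sampling; the toy corpus, vector dimension, learning rate, and uniform negative sampling are simplifications assumed for illustration (Mikolov et al. draw negatives from a unigram distribution raised to the 3/4 power), not details of the thesis's implementation:

```python
import numpy as np

# Toy corpus; a real run would use a large text collection.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}

dim, window, neg, lr, epochs = 16, 2, 3, 0.05, 200
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # "input" word vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # "output" context vectors

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(epochs):
    for sent in corpus:
        for pos, word in enumerate(sent):
            center = idx[word]
            lo, hi = max(0, pos - window), min(len(sent), pos + window + 1)
            for ctx_pos in range(lo, hi):
                if ctx_pos == pos:
                    continue
                ctx = idx[sent[ctx_pos]]
                # One positive pair plus `neg` uniformly drawn negatives
                # (a simplification of the unigram^(3/4) sampling table).
                targets = [ctx] + list(rng.integers(0, len(vocab), size=neg))
                labels = [1.0] + [0.0] * neg
                v = W_in[center]
                grad_v = np.zeros(dim)
                for t, y in zip(targets, labels):
                    p = sigmoid(v @ W_out[t])
                    g = p - y                   # binary cross-entropy gradient
                    grad_v += g * W_out[t]
                    W_out[t] -= lr * g * v
                W_in[center] -= lr * grad_v

# Nearest neighbors of "cat" by cosine similarity in the learned space
q = W_in[idx["cat"]]
sims = W_in @ q / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(q) + 1e-9)
print([vocab[i] for i in np.argsort(-sims)[:3]])
```

Each positive (center, context) pair is treated as a binary classification problem against a handful of sampled negatives, which is what lets negative sampling avoid computing a softmax over the full vocabulary.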
Date: May 2016
Creator: Le, Thu Anh
System: The UNT Digital Library