
Representing knowledge as a partitioning on a single set of words

Imagine a tessellation made from equilateral, triangle-shaped tiles. We could lay tile after tile beside one another indefinitely. Now we would like to model the following properties of language using a tessellation. We want tiles of different shapes to fit together in our tessellation, where different tiles contain different information in the form of sentences. This will be useful in a number of ways. The process of communication becomes equivalent to selecting a particular tile from this space, as could the process of acquiring commands to give to an agent responsible for taking actions. To select a tile, we may choose the Nth term in this tessellation according to some sorting algorithm that orders the space. We will also depart from having these tiles tessellate a Euclidean space and instead have them tile a non-Euclidean manifold, so that the shapes fit as we would like. In our tessellation procedure, we may begin with one sentence and a…
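As a loose sketch of the selection step described above (the tiles, sorting key, and function names here are all hypothetical stand-ins, not a worked-out method):

```python
# Hypothetical sketch: a "tessellation" of sentence tiles, where communication
# amounts to selecting the Nth tile under some total ordering of the space.

tiles = [
    "the cat sat on the mat",
    "rain is expected tomorrow",
    "fetch the red block",
]

def sort_key(sentence: str) -> tuple:
    # One arbitrary choice of sorting algorithm: order tiles by word count,
    # then alphabetically. Any total order over the space would do.
    return (len(sentence.split()), sentence)

def select_tile(n: int) -> str:
    # "Communicating" = picking the Nth tile in the sorted tessellation.
    return sorted(tiles, key=sort_key)[n]

print(select_tile(0))  # -> "fetch the red block"
```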

Iterative Addressing in a Virtual Memory for Conceptualisation

If we could build a lens where we sample from spaces iteratively in order to generate a particular, we may have a model that simplifies the process and helps keep the selection a series of linear transformations. For example, if we would like to generate an entire video, we could train an algorithm, perhaps a VAE, to select the principal components of videos and then sample from this distribution. It will be up to the system to learn these components on its own, but the axes of this distribution are not objects in their own right. A point on a manifold is selected that is defined by the information it contains, which is a complete instance of a video. The system proposed here separates features and creates a manifold for each. Each new manifold is linked by a parameter for selecting from that space. For this paper I present these manifolds as Euclidean spaces positioned within a cascade. What this means is that, during the process of generation…
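A minimal sketch of the cascade idea, assuming each manifold is a Euclidean latent space and the link between spaces is a linear map (the dimensions, matrices, and noise scale are illustrative assumptions, not part of the original proposal):

```python
import numpy as np

# Hypothetical sketch of a "cascade" of Euclidean latent spaces: a sample
# from one manifold parameterises the selection from the next, keeping each
# step a linear transformation.

rng = np.random.default_rng(0)

d_top, d_mid, d_out = 4, 16, 64          # sizes of the cascaded spaces
W1 = rng.normal(size=(d_mid, d_top))     # links the top manifold to the middle one
W2 = rng.normal(size=(d_out, d_mid))     # links the middle manifold to the output

def sample_cascade() -> np.ndarray:
    z_top = rng.normal(size=d_top)                       # point on the top-level manifold
    z_mid = W1 @ z_top + 0.1 * rng.normal(size=d_mid)    # selected by z_top
    x = W2 @ z_mid                                       # final instance (e.g. a video code)
    return x

instance = sample_cascade()
print(instance.shape)  # (64,)
```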

Idealistic Virtual Neural Network

Creating an Artificial Mind's Eye

You can randomly set up a word vector for each word, but depending on chance, some random initialisations will work better than others; in fact, with different input vectors the loss landscape itself is altered. As it is, word vectors are not random, so we could imagine making them less random through the learning process by backpropagating a loss that treats the inputs as variables, rather than the weights of the network. I believe that perhaps when we learn new information, the representations of the words (i.e. how they are understood) are altered fundamentally, and the BNN (biological neural network) then uses the exact same network to process these differing variables. So when we learn information relating "x is a y", the actual representation of those four words changes, along with a whole lot of other words (perhaps ALL words), in order to give you the disposition to talk about this relationship in different ways without having to train a new network. E.g. if "john is a star student"…
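A minimal PyTorch sketch of backpropagating into the inputs rather than the weights: the network is frozen and the gradient updates the word vectors themselves. The network shape, the toy "x is a y" loss, and all dimensions are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: keep the network fixed and backpropagate the loss into
# the word vectors, so learning a relation reshapes the representations of
# the words rather than the weights of the network.

torch.manual_seed(0)

vocab_size, dim = 100, 8
word_vectors = torch.randn(vocab_size, dim, requires_grad=True)  # trainable inputs

net = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 1))
for p in net.parameters():
    p.requires_grad_(False)              # the network itself stays frozen

optimizer = torch.optim.SGD([word_vectors], lr=0.1)

# Toy relation "x is a y": push the network's score for word 3 toward word 7's.
for _ in range(100):
    optimizer.zero_grad()
    loss = (net(word_vectors[3]) - net(word_vectors[7])).pow(2).sum()
    loss.backward()                      # gradients flow into the word vectors
    optimizer.step()

print(loss.item())
```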