
Showing posts from May, 2018

ABSTRACT submitted

BACKPROPAGATION ON A NOVEL NETWORK (NON-ANN) FOR HATE SPEECH DETECTION AND MORE This paper introduces a novel type of network that can accurately classify "hate speech" and differentiate it from normal content. It develops a network centered around a unique POS (part of speech) tagging type of system whose specifics are learnt by the network. Each word is assigned variables that indicate the polarity (sign) of the effect a particular word has on every other word in a sentence, the magnitude of that effect, and the extent to which the word filters itself from the effects of other words. When choosing a set of properties, we would like to properly understand what we want the network to do. We know that posts classified as hate speech and those that are not both draw their words from a common pool (they share words), but we still want the results for the opposite classes to polarize. On top of this, I believe that this represents a complex system, in that small chang...
This involves a "hate speech" or "spam" detection algorithm (depending on the training data). It will be based on ANN theory and involve the backpropagation algorithm, while not being based on an actual neural network. The training data will be labeled into two classes, "hate speech" and "non hate speech". Hate speech will have an expected value of 1, representing certain hate speech, while non hate speech will have an expected value of -1. So what we would like is for the software to take a post as input and give it a value of 1 or -1, or something much closer to one than the other. During training, the program will compute a value by performing the equivalent of forward propagation through the parameters we assign to each of the multiple structures we embed in the input... then, using this value and the expected value, we will compute a cost function. We will then backpropagate through the hierarchical structure of parameters in the direc...
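
To make this concrete, here is a minimal sketch of what such a per-word parameterisation and training step might look like, assuming each word carries a learned polarity (its signed effect on the other words) and a self-filtering gate. The names, the tanh squashing, and the use of numerical gradients in place of true backpropagation are my own illustrative choices, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["you", "are", "wonderful", "awful", "people"]
# Illustrative per-word parameters (not the paper's actual scheme):
# polarity[w]    - signed strength of word w's effect on other words
# filter_gate[w] - how much word w shields itself from the others' effects
polarity    = {w: rng.normal(scale=0.1) for w in vocab}
filter_gate = {w: rng.normal(scale=0.1) for w in vocab}

def forward(words):
    """Score a post: each word receives the summed effects of the other
    words, attenuated by its own gate; the total is squashed to (-1, 1)."""
    score = 0.0
    for i, w in enumerate(words):
        incoming = sum(polarity[v] for j, v in enumerate(words) if j != i)
        gate = 1.0 / (1.0 + np.exp(-filter_gate[w]))   # sigmoid in [0, 1]
        score += (1.0 - gate) * incoming
    return np.tanh(score)

def train_step(words, target, lr=0.05, eps=1e-4):
    """One gradient-descent step on the squared error, using numerical
    gradients for brevity."""
    for table in (polarity, filter_gate):
        for w in set(words):
            base = (forward(words) - target) ** 2
            table[w] += eps
            grad = ((forward(words) - target) ** 2 - base) / eps
            table[w] -= eps + lr * grad

for _ in range(300):
    train_step("you are awful people".split(), target=1.0)      # hate speech
    train_step("you are wonderful people".split(), target=-1.0)  # benign

print(forward("you are awful people".split()))      # should move toward +1
print(forward("you are wonderful people".split()))  # should move toward -1
```

A real implementation would derive the gradients analytically and backpropagate them through the hierarchical structure of parameters, as described above.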

Backpropagation with global minimum gradient descent

If a particular posterior distribution is a surface in a high-dimensional space, we may ask what the dimensions of this space represent. From my understanding, in a CNN the dimensions could represent the representation encoded at each neuron in the network, as well as the filters, where the high-dimensional surface then represents how each neuron's "bias" contributes (as amplitude in that dimension) to the bias of the network to classify, say, a cat as a cat. And if there are 4 neurons, one each for the legs, the head, the body and the tail, then a random image must have a 4D normal distribution (surface) in this space to be classified as a cat. Of course neurons never learn that cleanly, because the thing that causes a neuron to fire may be part of a leg and part of a head; with the appropriate settings even this network can identify cats, but the distribution will not be normal. If this reasoning is fine so far, the real question is: is it possible to give conditions to the backpropagation algorithm such that r...
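
A toy way to probe this claim is to collect the four neurons' activations over many random inputs and look at the resulting 4D point cloud; the tiny random-filter "part detectors" below are purely hypothetical stand-ins for trained leg/head/body/tail neurons.

```python
import numpy as np

rng = np.random.default_rng(1)

# Four hypothetical "part detectors" (legs, head, body, tail): random
# linear filters over a flattened 8x8 image patch.
filters = rng.normal(size=(4, 64))

def activations(image):
    """ReLU responses of the four detectors to one image."""
    return np.maximum(filters @ image.ravel(), 0.0)

# Stack the 4D activation vectors of many random images.
samples = np.stack([activations(rng.normal(size=(8, 8))) for _ in range(5000)])

# If the cloud were jointly normal, mean and covariance would describe it
# completely; the ReLU clipping alone already skews it away from normal.
print("mean:", samples.mean(axis=0))
print("skewness:", ((samples - samples.mean(0)) ** 3).mean(0) / samples.std(0) ** 3)
```

In this toy setup the distribution is visibly non-normal, consistent with the point that real neurons rarely factor an image that cleanly.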

Applied learning optimisation

Imagine a particular posterior distribution is represented as a two-dimensional surface with ridges and contours. During training, different aspects of this distribution become apparent to the agent. Say at time t = n the amount of the distribution revealed is S(n); we could imagine a small change in learning, dS/dt, to reveal a bit more information about the distribution. Now, in order for something to exist, whether information or matter, it requires a form, and form is a function of contrast. Or rather, A = A because A ≠ ¬A (the axiom of identity: a thing is what it is because it can be contrasted with what it is not). So any feature of the posterior distribution must exist because it contrasts with another part of the posterior distribution. So if we are developing a ridge, there must be parts that are lower than others and parts that are higher than others. When we observe dS/dt we reveal a little more of where this change is headed... which means we can optimise the learning pr...
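
One very small sketch of how observing dS/dt might steer training: below, the per-step drop in the loss stands in for the amount of the distribution newly revealed, and the learning rate grows while that proxy is positive and shrinks when it stalls. The factors and the quadratic toy objective are my own illustration, not a method from the post.

```python
def adaptive_lr(prev_loss, loss, lr, grow=1.1, shrink=0.5):
    """Treat the loss drop as a proxy for dS/dt: accelerate while new
    structure is being revealed, back off when progress stalls."""
    dS_dt = prev_loss - loss           # positive when more is "revealed"
    return lr * grow if dS_dt > 0 else lr * shrink

# Toy quadratic objective to show the schedule in action.
theta, lr, prev_loss = 5.0, 0.1, float("inf")
for step in range(30):
    loss = (theta - 2.0) ** 2
    grad = 2.0 * (theta - 2.0)
    lr = adaptive_lr(prev_loss, loss, lr)
    theta -= lr * grad
    prev_loss = loss
print(theta)  # should settle near the minimum at 2.0
```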

Fractals for NLP

If you have a hyper cellular automaton, it has some interesting properties. Every path outward from a node will return multiple times to the same node. Since this is true, this pattern can be described as fractal in nature, since it is self-containing at lower depths. Also, since this is true of all nodes, the cellular automaton becomes highly fractal in nature, where every fractal is contained within every other fractal. To illustrate this another way, we would have as many two-dimensional shapes as there are nodes. Since a node is made of other nodes, it means we could align these different shapes, without gaps in between them, such that they form the outline or shape of any of the individual pieces, just larger. Similarly, you could analyze each of its parts and align them with smaller versions of the overlying shapes within it. This goes down forever and up forever, with the number of nodes in the cellular automaton's graph being the degrees of freedom. In choos...
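
In graph terms, "every path outward from a node returns to the same node" is the statement that the automaton's graph is strongly connected. A small check of that property on a toy graph, purely as an illustration (the graph itself is hypothetical, not a construction from the post):

```python
from collections import deque

# Toy directed graph standing in for the automaton's node connections.
graph = {0: [1, 2], 1: [2], 2: [3], 3: [0, 1]}

def reachable(graph, start):
    """Breadth-first search: the set of nodes reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Strong connectivity: every node reaches every node, so any outward
# path can always be extended back to where it started.
print(all(reachable(graph, n) == set(graph) for n in graph))  # True
```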

Logistic Regression for NLP-brief

What we can do is place the words of English in some high-dimensional space. Then, when we enter the following into the system, "if I wake up late, I will miss the bus. and I wake up late", each of the words in that statement will have a polynomial associated with it, and when these polynomials combine to make the sentence, they form what would otherwise be a decision boundary that contains the words "I miss the bus", which is the modus ponens conclusion of the above statement. Because the words exist in a high-dimensional space it is easier to group words together, as during training we will use gradient descent to move the words and adjust the polynomials so that they align. Once they do align, we can test the system with an arbitrary premise and get a solution.
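
A toy sketch of the flavour of this idea, with some simplifications of my own: words are plain vectors rather than polynomials, a sentence is the sum of its word vectors, and a single logistic boundary scores whether a candidate conclusion follows from a premise. Gradient descent moves both the words and the boundary, as in the post.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = "if i wake up late will miss the bus and walk to school".split()
vocab = {t: i for i, t in enumerate(tokens)}

dim = 8
E = rng.normal(scale=0.1, size=(len(vocab), dim))  # word positions in space
w, b = np.zeros(dim), 0.0                          # decision boundary

def encode(sentence):
    """A sentence is the sum of its word vectors."""
    return E[[vocab[t] for t in sentence.split()]].sum(axis=0)

def prob(premise, conclusion):
    """P(conclusion follows from premise) under a linear boundary."""
    z = w @ (encode(premise) - encode(conclusion)) + b
    return 1.0 / (1.0 + np.exp(-z))

def train_step(premise, conclusion, y, lr=0.5):
    """One gradient-descent step on the cross-entropy: it moves the
    words themselves as well as the boundary."""
    global w, b
    p, c = encode(premise), encode(conclusion)
    err = prob(premise, conclusion) - y     # d(loss)/dz
    for t in premise.split():
        E[vocab[t]] -= lr * err * w
    for t in conclusion.split():
        E[vocab[t]] += lr * err * w
    w = w - lr * err * (p - c)
    b = b - lr * err

premise = "if i wake up late i will miss the bus and i wake up late"
for _ in range(200):
    train_step(premise, "i miss the bus", 1)
    train_step(premise, "i walk to school", 0)

print(prob(premise, "i miss the bus"))    # should approach 1
print(prob(premise, "i walk to school"))  # should approach 0
```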

Center of Gravity algorithm

The recent progress in AI has been phenomenal. This has been made possible by the advent of faster processing units and better algorithms. Despite this rise in the state of AI, there is a lament that most of the work has applied the new advances to a narrow scope, and the original intention of the “founding fathers”, the creation of a machine that has human-level abilities in processing information, has not attracted much attention. This paper aims to propose a path that could lead to the creation of a so-called AGI: an artificially intelligent machine whose intelligence equals that of the average human in all areas in which humans display intelligence. I will focus on the field of Natural Language Processing, as I believe that is where most of the work is needed. The problem with the traditional approaches to AI is that inference is largely stochastic; while in humans this may also be true, this stochasticity is constrained by rules such as can be found in th...

Intelligence of cellular automata

https://www.youtube.com/watch?v=1obZKnrFnXY Please have a look at the above video... it is necessary to do so so that you understand the following text. In that video we had three boxes: one where the premise was placed, one which was the mapping procedure for modus ponens, and one which happened to show the conclusion. This time around, the premise box will be a database. So we will feed the sequence of words in a text into the grid, with different sentences having different colors and different words in each sentence having different shades of that color. Note that there will not be enough colors for a typical text, so we will have to find a way of representing our own version of a color model that is not based on the real one; as you will see, this frees us up in more ways than one. This system has two layers of abstraction. Firstly, we can talk of the lower layer, where words are literally adjacent to other words; we will change which particular ones during training. And then on an...
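
As a small sketch of how such an encoding might work (assuming, as my own stand-in for the post's custom color model, that a "color" is just the sentence index and a "shade" the word's position within the sentence):

```python
def encode_text(text):
    """Map each word to an abstract (color, shade) pair: the color is
    the sentence index, the shade the word's position within it. Being
    plain integers, these never run out the way real colors would."""
    grid = []
    for color, sentence in enumerate(text.split(".")):
        for shade, word in enumerate(sentence.split()):
            grid.append((word, color, shade))
    return grid

for cell in encode_text("the cat sat. the dog barked."):
    print(cell)
# ('the', 0, 0) ('cat', 0, 1) ('sat', 0, 2)
# ('the', 1, 0) ('dog', 1, 1) ('barked', 1, 2)
```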

Pattern Recognition

There is a certain way that the human brain identifies patterns. If we look specifically at music we notice one thing: the degree of utility or pleasure the human brain derives depends fully on its complexity (I will qualify that later). What complexity really means is contrast. If you have low-frequency parts, you need parts in the mix that also contain mid and high frequencies. If you have a lot of discrete, note-based parts, you will need a lead or string part to balance them. Balancing is adjusting contrast. If you have a part where most of the notes go up, then you need to follow it with a part where most of the notes go down. If you have a part with quick changes, you need to follow it with a part with slow changes. Now there are exceptions, but the exceptions follow the same rule. You CAN have lots of non-contrast in a part of a song, as long as some other part of the song strongly contrasts with that one, i.e. a contrast between parts that internally are not contrasting in some respe...

Alternate World Algorithm

If we had the time and patience to teach an agent grounded knowledge, knowledge from experience and the historical context of concepts, we would have more success in giving it a model of language to work with. We could create a hypothetical world like our own as a simulation, and in this environment it would learn how to use language as a tool to achieve goals. E.g. there would be characters that teach the agent how to relate to objects by naming them, and by the agent observing how our language correlates with the activities we do when using it. So one character would be seen to pick up a cup when another character says "please pick up that cup"; the agent then correlates the two and realizes what those words mean, because of all the correlations it observes. There are problems with this, namely that the characters training the agent are agents in their own right and need that training first as well. I propose another solution. We create a hypothetical world that is nothing like o...
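
The correlation step described here can be sketched as cross-situational counting: pair each utterance with the action observed alongside it, and read off a word's meaning as its most co-occurrent action. The scenes below are my own toy illustration.

```python
from collections import Counter, defaultdict

# Observed (utterance, action) pairs from the simulated world.
scenes = [
    ("please pick up that cup",  "pick_up_cup"),
    ("please pick up that book", "pick_up_book"),
    ("please put down that cup", "put_down_cup"),
]

# Count how often each word co-occurs with each action.
cooc = defaultdict(Counter)
for utterance, action in scenes:
    for word in utterance.split():
        cooc[word][action] += 1

# A word's best-guess meaning is its most co-occurrent action; content
# words like "cup" and "book" pick out their objects' actions, while a
# word like "please" co-occurs equally with everything and stays vague.
for word in ("cup", "book", "please"):
    print(word, "->", cooc[word].most_common(2))
```

With more scenes, the counts for "cup" would concentrate on cup-directed actions, which is the "all the correlations it observes" effect this proposal relies on.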