Fractals for NLP

A hyper cellular automaton has some interesting properties. Every path outward from a node eventually returns to that same node, and does so multiple times. Because of this, the pattern can be described as fractal in nature: it is self-containing at lower depths. And since the same holds for every node, the whole cellular automaton becomes highly fractal, with every fractal contained within every other fractal. To illustrate this another way, we would have as many two-dimensional shapes as there are nodes. Since a node is made of other nodes, we could align these different shapes, without gaps in between them, so that they form the outline of any one of the individual pieces, only larger. Similarly, you could analyze each of its parts and align them with smaller versions of the overlying shapes within it. This goes down forever and up forever, with the number of nodes in the cellular automaton's graph being the degrees of freedom.
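One way to read the "every path returns" property in plain graph terms (my reading, not a definition from the original) is that the automaton's wiring is strongly connected: any walk leaving a node can always be led back to it, and on a finite graph it therefore revisits it over and over. A minimal sketch of that check, on an invented toy graph:

```python
# Minimal sketch (not the original implementation): the "every path returns"
# property read as strong connectivity of the wiring graph.
from collections import defaultdict

def is_strongly_connected(edges):
    """True if every node can reach every other node, so any outward
    path can always be led back to its starting node."""
    graph, reverse, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in edges:
        graph[a].append(b)
        reverse[b].append(a)
        nodes.update((a, b))

    def reachable(adj, start):
        seen, stack = {start}, [start]
        while stack:
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    start = next(iter(nodes))
    # Strongly connected iff one node reaches all others and all others reach it.
    return reachable(graph, start) == nodes and reachable(reverse, start) == nodes

# Toy wiring: a cycle with a chord; every outward path eventually returns.
example_edges = [("a", "b"), ("b", "c"), ("c", "a"), ("b", "d"), ("d", "a")]
print(is_strongly_connected(example_edges))  # True
```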
In choosing the particular shape for each node, you need to know how the automaton is wired, to see how each node is related to the others. Some nodes may be adjacent to two other nodes while others may be adjacent to ten or even more, so if the number of adjacencies differs per node, the particular 2D shapes formed will differ as well.
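As one way to make the degree-to-shape idea concrete (an assumption of mine, not a rule stated above), imagine giving each node a regular polygon with one side per adjacency, so nodes of different degree necessarily get different shapes:

```python
# Hedged sketch: a node's degree picks its polygon. The adjacency list is an
# invented example, not data from the post.
adjacency = {
    "rain":  ["wet", "cloud", "street"],               # degree 3 -> triangle
    "wet":   ["rain", "street", "slippery", "cold"],   # degree 4 -> square
    "cloud": ["rain", "sky"],                          # degree 2 -> no simple polygon
}

for node, neighbours in adjacency.items():
    sides = len(neighbours)
    shape = f"{sides}-gon" if sides >= 3 else "line segment (degree < 3)"
    print(f"{node!r}: degree {sides} -> {shape}")
```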
So we first need to come up with the adjacency list; then we can produce a structure with particular shapes that satisfies it. To do that we need to focus on a use for the words. We may also want to place these shapes so that they correlate with a particular sentence. That means we only use some of the shapes (though within each exist the rest), and we need to constrain the shapes of the nodes (and hence their adjacencies) so that they fit together only if they make a legitimate sentence. Legitimate propositional-logic premises can then be the only type of sentences that cause the shapes to fit without any gaps. Once we have fed in a collection of statements, it is up to the computer to figure out the correct adjacencies that would produce sentences behaving like other propositional-logic statements, in that there are no gaps when you join them together, and that also fulfill the requirement that each node's shape can be made from the shapes of the other nodes.
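A very rough starting point for how the computer might begin to figure out the adjacencies from fed-in statements (my assumption; the actual shape-fitting constraint is not shown here) is simply to link words that appear next to each other:

```python
# Hedged sketch: derive a candidate adjacency list from a collection of
# statements by linking consecutive words. The statements are invented examples.
from collections import defaultdict

statements = [
    "if it rains the street is wet",
    "it rains",
    "the street is wet",
]

adjacency = defaultdict(set)
for sentence in statements:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        adjacency[a].add(b)
        adjacency[b].add(a)

for word in sorted(adjacency):
    print(word, "->", sorted(adjacency[word]))
```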

So when the computer gets words, it tries to arrange the shapes so that no gaps are left in the larger shape they create; from that arrangement it knows what type of logic the sentence uses. Note that each logical type of sentence now has a symbolic shape to it.
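To make "no gaps" slightly more concrete, here is one standard geometric reading of it (my illustration, not a definition from the post): regular polygons meeting around a shared point leave no gap exactly when their interior angles sum to 360 degrees.

```python
# Hedged illustration of a "no gaps" test at a shared point.
def interior_angle(sides: int) -> float:
    """Interior angle of a regular polygon with the given number of sides."""
    return (sides - 2) * 180.0 / sides

def fits_without_gaps(polygon_sides) -> bool:
    return abs(sum(interior_angle(s) for s in polygon_sides) - 360.0) < 1e-9

print(fits_without_gaps([6, 6, 6]))     # three hexagons around a point -> True
print(fits_without_gaps([4, 4, 4, 4]))  # four squares -> True
print(fits_without_gaps([5, 5, 5]))     # three pentagons leave a gap -> False
```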

CASE A
Then it will send the sentence to the correct cellular automaton tree (with a different set of edges), where the shape can be rearranged in only one other way that leaves no gaps, and that rearrangement can be made only by combining the shapes correlating with the conclusion.
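A hedged logical analogue of CASE A (my illustration of the intent, not the actual mechanism): among candidate statements, only one is entailed by the premises, just as only one gap-free rearrangement is supposed to exist, and that one corresponds to the conclusion.

```python
# Hedged sketch: brute-force propositional entailment as a stand-in for the
# unique gap-free rearrangement. Formulas and variable names are invented.
from itertools import product

def entails(premises, conclusion, variables=("p", "q")):
    """True if every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(f(env) for f in premises) and not conclusion(env):
            return False
    return True

# Premises: p, and p -> q  (modus ponens)
premises = [lambda e: e["p"], lambda e: (not e["p"]) or e["q"]]
candidates = {"q": lambda e: e["q"], "not q": lambda e: not e["q"]}
for name, formula in candidates.items():
    print(name, "entailed:", entails(premises, formula))
# Only "q" is entailed -> the unique valid "rearrangement".
```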
CASE B
Each node's shape, i.e. the arrangement of 2D shapes without gaps, forms yet another, simpler cellular automaton. Depending on the shape, a token will travel down all the shapes in this sentence's shape, and deeper into each shape's sub-shapes, to collect a series of words corresponding to, say, every fifth node visited, until a period is encountered or some other rule ends the traversal of this structure.
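A small sketch of the CASE B traversal; the every-Nth-node sampling and the period as a stopping symbol come from the text above, while the nested structure itself is an invented stand-in for the sub-shapes:

```python
# Hedged sketch: a token walks depth-first through shapes and their sub-shapes,
# keeping every Nth word visited until a period is met.
def traverse(shape, every=5, stop="."):
    collected, count = [], 0
    stack = [shape]
    while stack:
        node = stack.pop()
        if isinstance(node, list):          # a shape containing sub-shapes
            stack.extend(reversed(node))
            continue
        count += 1
        if node == stop:                    # stop at the period
            break
        if count % every == 0:              # keep every Nth word visited
            collected.append(node)
    return collected

nested = ["the", ["street", "is", ["wet", "when"]], "it", "rains", "today", "still", ".", "more"]
print(traverse(nested, every=3))  # -> ['is', 'it', 'still']
```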

This is very much in its infancy; I am exploring multiple avenues simultaneously.
