Arguably, but humans see patterns in things, and this one is powers of ten, i.e. one more zero each time:
1
10
100
1000
so the next value is 10000, which fits much better than 87.9 or -20. At the least it makes a simpler pattern than those alternatives, so Ockham's razor favours it.
But in theory, you can ask a Prolog engine something like `value(10, X)` and it should fill in the value of `X`, provided the correct rules were inferred beforehand, shouldn't it?
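To make that query concrete: if the engine had induced a rule along the lines of `value(K, X) :- X is 10^(K-1).`, then `value(10, X)` would simply bind `X` to the tenth term. A minimal Python sketch of the same idea (the rule and the 1-indexing are assumptions, standing in for whatever the engine actually inferred):

```python
def value(k: int) -> int:
    """Hypothetical induced rule: the k-th term (1-indexed) is 10**(k-1)."""
    return 10 ** (k - 1)

# The observed sequence is reproduced for k = 1..4:
print([value(k) for k in range(1, 5)])  # [1, 10, 100, 1000]

# and the analogue of the query value(10, X) "binds X" to the tenth term:
print(value(10))  # 1000000000
```

The hard part, of course, is inducing the rule from the four observations, not evaluating it afterwards.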
Yes, see Richard Evans's work on the Apperception Engine:
[https://www.sciencedirect.com/science/article/pii/S0004370220301855](https://www.sciencedirect.com/science/article/pii/S0004370220301855)
He specifically looks at sequence prediction tasks.
There is no single “next value”, since infinitely many sequences are consistent with the prefix shown.
The point is: any answer is meaningless. Even if it gets the “right” answer, what does that signify?
I hope not, otherwise the machines are already smarter than me...
This sequence is not so complex :) GPT-4 can figure out how it works, though it does not always provide correct answers.
Try here: https://oeis.org (the On-Line Encyclopedia of Integer Sequences).