I think the most important point is that the way you phrased your question is misleading.
If you're thinking about a sequence like you would find in a logic test, then I would suggest the answer is probably 6, for the various reasons that have already been given by others.
If you're just talking about a function that would fit those points, then Christopher Kennedy and Joel Frankford's answers here are on point; and this explains why different people gave valid answers to the sequence: given a finite set of points, you can actually create an infinite number of functions that go through those points (provided, of course, that the x-coordinates are all distinct).
Perhaps more interesting than the answer itself, here's how it can be done:
Let [math]\left( x_{1},...,x_{n} \right)[/math] be a tuple of distinct real numbers and [math]\left( a_{1},...,a_{n} \right)[/math] a tuple of real numbers.
You want to create a function f such that [math]\forall k \in \left \{ 1,\dots,n \right \} , \; f(x_k)=a_k[/math]
It is really easy to do from a theoretical standpoint using Lagrange interpolation polynomials. However, before I present it, I think it is important to understand how they actually work.
You have to create one function that goes through all the points you selected. The remarkable intuition Lagrange had is that this is pretty easily done if you can create not one but several polynomials that yield the neutral elements of the most basic composition laws (+ and *, hence 0 and 1) at the specific points you required, and then multiply and sum them as you want.
For now, we will consider n = 2.
That is a really jargon-y way to say that if we have a polynomial called [math]L_1[/math] and a polynomial called [math]L_2[/math] such that:
[math]L_1(x_1)=1[/math] and [math]L_1(x_2)=0[/math];
[math]L_2(x_1)=0[/math] and [math]L_2(x_2)=1[/math]
then it follows pretty quickly that [math]P = a_1\cdot L_1+a_2\cdot L_2 [/math] is a function that satisfies our requirements, since [math]P(x_1) = a_1\cdot L_1(x_1)+a_2\cdot L_2(x_1)=a_1\cdot 1+a_2\cdot 0=a_1 [/math], and the same reasoning yields [math]P(x_2) = a_2 [/math].
The question remaining now is: how can we actually obtain those polynomials?
Here's how it works to create [math]L_1[/math]:
First, we want [math]L_1(x_2)=0[/math], so we should probably start building our polynomial with [math]\left( X-x_2 \right)[/math]; that way we get 0 when it is evaluated at [math]x_2[/math].
Then we need it to be 1 when evaluated at [math]x_1[/math]. How about we divide by [math]\left( x_1-x_2 \right)[/math]? That way our polynomial [math]L_1[/math] becomes [math]\frac{\left( X-x_2 \right)}{\left( x_1-x_2 \right)}[/math] and satisfies both our conditions.
With the same reasoning, we create [math]L_2(X)=\frac{\left( X-x_1 \right)}{\left( x_2-x_1 \right)}[/math].
And now we have our function P that goes through our points!
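If it helps to see this concretely, here is a minimal sketch in Python of the two-point construction (the points (5, 20) and (6, 30) are just the first two from your sequence, picked for illustration):

```python
# Two-point version of the construction above.
# Illustration points: (x1, a1) = (5, 20), (x2, a2) = (6, 30)
x1, a1 = 5, 20
x2, a2 = 6, 30

def L1(x):
    return (x - x2) / (x1 - x2)   # equals 1 at x1 and 0 at x2

def L2(x):
    return (x - x1) / (x2 - x1)   # equals 0 at x1 and 1 at x2

def P(x):
    return a1 * L1(x) + a2 * L2(x)

print(P(x1), P(x2))  # 20.0 30.0 -- P goes through both points
```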
Back to the general case with [math]n\in \mathbb{N}^*[/math]
Following the same intuition, we can introduce the following family of polynomials:
[math]\forall k\in \left \{ 1,\dots,n \right \} , \quad L_k(X)=\frac{\prod_{i\neq k}{\left( X-x_{i} \right)}}{\prod_{i\neq k}{\left( x_{k}-x_{i} \right)}}[/math]
which all have the same properties as before, namely:
[math]L_k(x_k)=1 [/math] and [math]\forall i\in \left \{ 1,\dots,n \right \}, i\neq k, \; L_k(x_i)=0[/math].
It then follows that if we let [math]P = \sum_{k=1}^{n}{a_k\cdot L_k}[/math], then we have:
[math]\forall i\in \left \{ 1,\dots,n \right \}, \; P(x_i) = \sum_{k=1}^{n}{a_k\cdot L_k(x_i)}=a_1\cdot L_1(x_i)+\dots+a_i\cdot L_i(x_i)+\dots+a_n\cdot L_n(x_i)=a_1\cdot 0+\dots+a_i\cdot 1+\dots+a_n\cdot 0=a_i[/math], which is exactly what we were looking for.
Now we have it: we have created a function (a polynomial in this case) that goes through every point we wanted.
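For readers who prefer code to formulas, here is a minimal sketch in Python of this general construction (the helper name lagrange_interpolate is mine; the points are the ones from your question), which also shows what the interpolating polynomial gives at 3:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the Lagrange polynomial through the points (xs[k], ys[k])."""
    total = 0.0
    for k in range(len(xs)):
        # Build L_k(x): equals 1 at xs[k] and 0 at every other xs[i]
        Lk = 1.0
        for i in range(len(xs)):
            if i != k:
                Lk *= (x - xs[i]) / (xs[k] - xs[i])
        total += ys[k] * Lk
    return total

xs = [5, 6, 7, 8]
ys = [20, 30, 42, 56]
print([lagrange_interpolate(xs, ys, x) for x in xs])  # [20.0, 30.0, 42.0, 56.0]
print(lagrange_interpolate(xs, ys, 3))                # ~6.0 (here P(X) = X^2 - X, so P(3) = 6)
```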
How can we go from this one function to an infinite number of functions?
What other people here were trying to explain is that there is not just one function satisfying those conditions, which is why we cannot know for sure what f(3) equals.
Here's how I would explain it:
As we have seen, it is really easy to create many functions that yield 0 when evaluated at several chosen x-coordinates.
In our case, I can of course think of [math]f_{1}\left( x \right)=\prod_{i=1}^{n}{\left( x-x_{i} \right)}[/math], which is the trick we used to create the Lagrange polynomials, but also [math]f_j(x)=\prod_{i=1}^{n}{\left( x-x_{i} \right)^{j}}[/math] where j is a positive integer. Note that there is an infinite number of [math]f_j[/math] functions.
What's interesting is that for any given j, [math]P+f_j[/math] is a function that goes exactly through the points we wanted, since each [math]f_j[/math] vanishes at every [math]x_i[/math].
So now we have the infinite set of functions [math]\left( P+f_{j} \right)_{j\in \mathbb{N}^* }[/math] that all satisfy the requirements, and if we apply that to the case you suggest - [math](a_1,a_2,a_3,a_4)= (20,30,42,56)[/math] and [math](x_1,x_2,x_3,x_4)= (5,6,7,8)[/math] - all those functions give very different results when evaluated at 3.
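To make that concrete, here is a small check in Python (it reuses the lagrange_interpolate sketch from above; evaluating only j = 1, 2, 3 is just for illustration):

```python
# Assumes lagrange_interpolate from the sketch above is already defined.
def f(j, x, xs):
    """f_j(x) = prod_i (x - x_i)^j, which vanishes at every x_i, so P + f_j still interpolates."""
    prod = 1
    for xi in xs:
        prod *= (x - xi) ** j
    return prod

xs = [5, 6, 7, 8]
ys = [20, 30, 42, 56]
for j in range(1, 4):
    print(j, lagrange_interpolate(xs, ys, 3) + f(j, 3, xs))
# j = 1 -> ~126, j = 2 -> ~14406, j = 3 -> ~1728006:
# the same four points, yet wildly different values at 3
```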
NB: The functions built from polynomials are obviously not ALL the functions that satisfy these requirements; one can also think of fancier things like [math]e^{\prod_{i=1}^{n}{\left( x-x_{i} \right)}}\cdot P[/math] and far more complex constructions, but you get the point.
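And indeed, a quick numerical check (again reusing the lagrange_interpolate sketch, purely as an illustration) confirms that this exponential variant still passes through the four points while giving yet another value at 3:

```python
import math

# Again assumes lagrange_interpolate from the sketch above is already defined.
def fancy(x, xs, ys):
    """exp(prod_i (x - x_i)) * P(x): the exponent is 0 at every x_i, so the extra factor is 1 there."""
    prod = 1
    for xi in xs:
        prod *= (x - xi)
    return math.exp(prod) * lagrange_interpolate(xs, ys, x)

xs = [5, 6, 7, 8]
ys = [20, 30, 42, 56]
print([fancy(x, xs, ys) for x in xs])  # [20.0, 30.0, 42.0, 56.0]
print(fancy(3, xs, ys))                # yet another (astronomically large) value at 3
```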