Let's assume that the first ray is a, the second ray is b, .s is a ray's starting point (a 2D vector), .d is its direction (also a 2D vector), and × denotes the 2D cross product, defined as:
a × b = a.x * b.y - a.y * b.x
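For concreteness, here is that cross product as a small C helper; the vec2 struct and the name cross2 are just illustrative choices for this sketch, not anything prescribed above:

typedef struct { double x, y; } vec2;

/* 2D cross product: a.x * b.y - a.y * b.x */
static double cross2(vec2 a, vec2 b) {
    return a.x * b.y - a.y * b.x;
}

Its sign tells you on which side of a the vector b lies, which is what the intersection test below relies on.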
After factoring out the common terms, the solution to the intersection equations comes to:
d = b.s - a.s
det = b.d × a.d
u = b.d × d / det
v = a.d × d / det
If (and only if) det == 0, the rays are parallel and there are either zero intersection points or infinitely many (the latter when the rays are collinear and overlap). Executing this code as-is would then lead to a division by zero, so that case has to be checked first.
Otherwise, if both u and v are positive, the rays have a unique intersection point; u is the distance from a.s to that point along a, and v is the distance from b.s to it along b (assuming the direction vectors are normalized; otherwise they are parameters measured in multiples of |a.d| and |b.d|).
After de-vectorizing and inlining the cross product, this becomes:
dx = b.s.x - a.s.x
dy = b.s.y - a.s.y
det = b.d.x * a.d.y - b.d.y * a.d.x
u = (dy * b.d.x - dx * b.d.y) / det
v = (dy * a.d.x - dx * a.d.y) / det
Five subtractions, six multiplications and two divisions.
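As a sanity check, the same arithmetic can be wrapped into a self-contained C function. The ray2 struct, the function name and the out-parameter interface are assumptions made for this sketch (the vec2 type is repeated so the snippet stands on its own); the formulas are exactly the ones above, including the det == 0 guard for parallel rays and the positivity test on u and v:

#include <stdbool.h>

typedef struct { double x, y; } vec2;
typedef struct { vec2 s, d; } ray2;   /* s = starting point, d = direction */

/* Computes u and v as above; returns true only if the rays actually
 * intersect, i.e. det != 0 and both parameters are positive. */
static bool ray_ray_intersect(ray2 a, ray2 b, double *u, double *v) {
    double dx  = b.s.x - a.s.x;
    double dy  = b.s.y - a.s.y;
    double det = b.d.x * a.d.y - b.d.y * a.d.x;
    if (det == 0.0)
        return false;   /* parallel: zero or infinitely many intersections */
    *u = (dy * b.d.x - dx * b.d.y) / det;
    *v = (dy * a.d.x - dx * a.d.y) / det;
    return *u > 0.0 && *v > 0.0;
}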
If you don't care about the intersection point or the distance to it, these two divisions can be replaced by sign checks: u (and likewise v) is negative exactly when its numerator and det have opposite signs, i.e. when num * det < 0 or sign(num) != sign(det), whichever is more efficient on your target machine.
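As a sketch of that idea (reusing the ray2 type from the function above, and assuming the products neither overflow nor underflow), a pure hit test without divisions could look like this:

/* Hit test only: u and v are positive exactly when their numerators
 * have the same sign as det, so no division is needed. */
static bool rays_intersect(ray2 a, ray2 b) {
    double dx    = b.s.x - a.s.x;
    double dy    = b.s.y - a.s.y;
    double det   = b.d.x * a.d.y - b.d.y * a.d.x;
    double u_num = dy * b.d.x - dx * b.d.y;
    double v_num = dy * a.d.x - dx * a.d.y;
    if (det == 0.0)
        return false;   /* parallel */
    return u_num * det > 0.0 && v_num * det > 0.0;
}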
On the other hand, if you actually want to find the intersection point, you have to plug one of these solutions into the respective ray equation:
p = a.s + a.d * u
...which should be approximately equal (as far as numerical precision allows) to...
p = b.s + b.d * v
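Put into code (again reusing the types from the sketches above, with hypothetical function names), this last step is a single multiply-add per coordinate; the second form is mainly useful as a numerical cross-check:

/* Intersection point from the parameter u along ray a. */
static vec2 intersection_point(ray2 a, double u) {
    vec2 p = { a.s.x + a.d.x * u, a.s.y + a.d.y * u };
    return p;
}

/* The same point computed from ray b; should agree with the above
 * up to floating-point error. */
static vec2 intersection_point_from_b(ray2 b, double v) {
    vec2 p = { b.s.x + b.d.x * v, b.s.y + b.d.y * v };
    return p;
}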