This depends on the kind of projection. In raytracing the common kinds, perspective and orthogonal projection, are possible, but also less common ones like oblique projections, and even exotic things like panoramic projections. Furthermore, it is possible either to generate the rays in view space and transform them into world space, or to generate them in world space directly; the former is usually easier. You also need to define which of the 6 principal axis directions is the forward looking direction. Oh, and over-sampling and jitter play a role, too (I forgot these in my first write-up). Without defining such things, there are too many possibilities for a detailed description here.
In general you have 2 things to consider: the eye point, if the projection uses one, and the sample point on the view plane. From these you can define both parts required for a ray r, an "origin" r0 and a direction rd:
r( t ) := r0 + t * rd
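As a minimal sketch, the ray definition above can be written like this (the names `Ray` and `at` are illustrative, not from any particular library):

```python
from dataclasses import dataclass

@dataclass
class Ray:
    r0: tuple  # origin of the ray
    rd: tuple  # direction of the ray

    def at(self, t):
        # Evaluate r(t) = r0 + t * rd component-wise.
        return tuple(o + t * d for o, d in zip(self.r0, self.rd))
```

For example, `Ray((0, 0, 0), (1, 2, 3)).at(2)` gives the point `(2, 4, 6)`.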
The eye point (used by e.g. perspective projection) defines a point through which each ray passes, and as such gives a natural origin r0 for the rays. In view space the eye point is (0,0,0), while in world space it is the positional part of the observer's world transform.
Parallel projections have rays that do not pass through a common point, but they make a defined angle with the view plane. The orthogonal projection, for example, makes a 90° angle with the view plane (in other words, the direction rd is equal to the normal (or perhaps its inverse) of the view plane at the position of the sample).
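The two cases can be sketched as follows, assuming each ray is returned as an (origin, direction) pair; the helper names here are my own, not standard API:

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def perspective_ray(eye, sample):
    # Every ray passes through the eye point, so the eye is the origin
    # and the direction points from the eye to the sample on the view plane.
    return eye, normalize(tuple(s - e for s, e in zip(sample, eye)))

def orthogonal_ray(sample, plane_normal):
    # All rays share one direction (the view-plane normal, possibly inverted);
    # the sample point itself serves as the origin.
    return sample, normalize(plane_normal)
```

Note that the two projections differ in which part of the ray varies per pixel: perspective rays share an origin and vary in direction, while orthogonal rays share a direction and vary in origin.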
EDIT: ((Forgot to mention sampling)) The view is usually rectangular with a resolution of (Vx, Vy) pixels. If the view plane is also rectangular, then it has a size of, say, (w,h). (If the view plane is not rectangular, you need to map it to a rectangle.) Without over-sampling or jittering, each ray passes through the center of a rectangular portion of the view plane. There are Vx * Vy such portions, each one w/Vx by h/Vy in size. To find the centers, add half of that size to the lower-left corner of each portion. Hence the sample points in local view-plane co-ordinates are at
( -w/2 + w/2/Vx + i * w/Vx , -h/2 + h/2/Vy + j * h/Vy ) for 0 <= i < Vx and 0 <= j < Vy
assuming a symmetric viewport.
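The sample-point formula above translates directly into code; this is just a sketch of that arithmetic, with the function name being my own:

```python
def sample_point(i, j, Vx, Vy, w, h):
    # Center of pixel (i, j) in local view-plane coordinates, for a
    # symmetric viewport of size (w, h) and resolution (Vx, Vy):
    #   x = -w/2 + w/(2*Vx) + i * w/Vx
    #   y = -h/2 + h/(2*Vy) + j * h/Vy
    x = -w / 2 + w / (2 * Vx) + i * w / Vx
    y = -h / 2 + h / (2 * Vy) + j * h / Vy
    return x, y
```

For a 2x2 resolution on a 2-by-2 view plane, pixel (0,0) samples at (-0.5, -0.5) and pixel (1,1) at (0.5, 0.5), i.e. the centers of the four quadrants, as expected.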