Although Google did a good job on the Android documentation, the explanation of how to write code that captures video is rather short.
In this tutorial, we are going to write an activity that can preview, start and stop video capturing, and explain it in more detail than the basic documentation does.
We are going to do this for Android 2.1, and afterwards discuss the differences with 2.2.
Finally, we will illustrate how the undocumented, non-public setParameters method on MediaRecorder in 2.1 can be called through reflection.
This article is aimed at Android developers.

Setting the permissions

Since we are going to use the camera, the following permission definitely needs to be declared in our AndroidManifest.xml file:

<uses-permission android:name="android.permission.CAMERA" />

If we don't specify this, we will get a “Permission Denied” exception as soon as we try to access the camera from our code.

It is also good practice to declare which features of the camera we are going to use:

<uses-feature android:name="android.hardware.camera"/>
<uses-feature android:name="android.hardware.camera.autofocus"/>

If we don't specify these, however, the system will simply assume that all camera features (camera, autofocus and flash) are used. So just to make things work, we don't need to declare them.

We are also going to record audio during the video capture. So we also declare:

<uses-permission android:name="android.permission.RECORD_AUDIO" />

Setting up the camera preview

Before we discuss the actual video capturing, we will make sure that everything the camera sees is previewed on the screen.

SurfaceView is a special type of view that basically gives you a surface to draw to. It is used in various scenarios, such as drawing 2D or 3D objects, or playing videos.

In this case, we are going to draw the camera input to such a SurfaceView, so the user can preview the video and see what he is recording.

We define a camera_surface.xml layout file in which we set up the SurfaceView:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
	android:layout_width="fill_parent" android:layout_height="fill_parent">
	<SurfaceView android:id="@+id/surface_camera"
		android:layout_width="fill_parent"
		android:layout_height="fill_parent"
		android:layout_centerInParent="true">
	</SurfaceView>
</RelativeLayout>

The following activity will then use the surfaceview in the above layout xml and start rendering the camera input to the screen:

import java.io.IOException;

import android.app.Activity;
import android.graphics.PixelFormat;
import android.hardware.Camera;
import android.os.Bundle;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.widget.Toast;

public class CustomVideoCamera extends Activity implements SurfaceHolder.Callback {

	private static final String TAG = "CAMERA_TUTORIAL";

	private SurfaceView surfaceView;
	private SurfaceHolder surfaceHolder;
	private Camera camera;
	private boolean previewRunning;

	@Override
	public void onCreate(Bundle savedInstanceState) {
		super.onCreate(savedInstanceState);
		setContentView(R.layout.camera_surface);
		surfaceView = (SurfaceView) findViewById(R.id.surface_camera);
		surfaceHolder = surfaceView.getHolder();
		surfaceHolder.addCallback(this);
		surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	@Override
	public void surfaceCreated(SurfaceHolder holder) {
		camera = Camera.open();
		if (camera != null){
			Camera.Parameters params = camera.getParameters();
			camera.setParameters(params);
		}
		else {
			Toast.makeText(getApplicationContext(), "Camera not available!", Toast.LENGTH_LONG).show();
			finish();
		}
	}

	@Override
	public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
		if (previewRunning){
			camera.stopPreview();
		}
		Camera.Parameters p = camera.getParameters();
		p.setPreviewSize(width, height);
		p.setPreviewFormat(PixelFormat.JPEG);
		camera.setParameters(p);

		try {
			camera.setPreviewDisplay(holder);
			camera.startPreview();
			previewRunning = true;
		}
		catch (IOException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		}
	}

	@Override
	public void surfaceDestroyed(SurfaceHolder holder) {
		camera.stopPreview();
		previewRunning = false;
		camera.release();
	}
}

The main thing we are doing here is implementing a SurfaceHolder.Callback. This callback enables us to intervene when our surface is created, changed (format or size changes) or destroyed. Without this callback, our screen would just remain black.
After the surface is created, we obviously want to display what the camera is seeing. First, we get a reference to the camera by calling the static method Camera.open(). We only need to do this once, so we put it in the surfaceCreated method.
The actual start of the preview happens in the surfaceChanged method. This is because this method is called not only right after surface creation (the first “change”), but also every time something essential to the surface changes; we then want to stop the preview, change some parameters and restart the preview. For example, we use the passed width and height to set the preview size. By putting all of this in the surfaceChanged method, we make sure our preview always remains consistent with our surface.
When the surface is destroyed (this happens, for example, at onPause or onDestroy of the activity), we release the camera again, because otherwise other apps, like the native camera app, will start throwing “Camera already in use” exceptions.

On a final note,

surfaceHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

means that the surface will not own its buffers; this surface type is typically used for camera previews.

Note: Instead of making the activity implement the surface callback, you could also create a class that extends SurfaceView, have that class implement the callback, and use the subclass in the layout XML instead of the plain SurfaceView. If your activity is getting very long in terms of code, this might be a good thing to do.
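As a minimal sketch of that approach (the class name CameraPreview and the log tag are our own choices; the preview setup would mirror what the activity above already does):

import java.io.IOException;

import android.content.Context;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

	private Camera camera;

	public CameraPreview(Context context, AttributeSet attrs) {
		super(context, attrs);
		// Register for the surface lifecycle callbacks, just like the activity did.
		getHolder().addCallback(this);
		getHolder().setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
	}

	@Override
	public void surfaceCreated(SurfaceHolder holder) {
		camera = Camera.open();
	}

	@Override
	public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
		// Same preview setup as in the activity version above.
		try {
			camera.setPreviewDisplay(holder);
			camera.startPreview();
		} catch (IOException e) {
			Log.e("CAMERA_TUTORIAL", "Could not start preview", e);
		}
	}

	@Override
	public void surfaceDestroyed(SurfaceHolder holder) {
		camera.stopPreview();
		camera.release();
	}
}

In the layout XML, you would then replace the SurfaceView element with the fully qualified name of this class (for example <your.package.CameraPreview ... />).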

Capturing the video

We now add the following fields and method to our activity; the method will be called when the user decides to start recording:

	private MediaRecorder mediaRecorder;
	// Requires: import java.io.File; and import android.media.MediaRecorder;
	// The name of the temporary file in the cache directory is just an example.
	private File tempFile;
	private final String cacheFileName = "video_tmp.3gp";
	private final int maxDurationInMs = 20000;
	private final long maxFileSizeInBytes = 500000;
	private final int videoFramesPerSecond = 20;

	public boolean startRecording(){
		try {
			camera.unlock();

			mediaRecorder = new MediaRecorder();

			mediaRecorder.setCamera(camera);
			mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
			mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

			mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.DEFAULT);

			mediaRecorder.setMaxDuration(maxDurationInMs);

			tempFile = new File(getCacheDir(),cacheFileName);
			mediaRecorder.setOutputFile(tempFile.getPath());

			mediaRecorder.setVideoFrameRate(videoFramesPerSecond);
			mediaRecorder.setVideoSize(surfaceView.getWidth(), surfaceView.getHeight());

			mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.DEFAULT);
			mediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.DEFAULT);

			mediaRecorder.setPreviewDisplay(surfaceHolder.getSurface());

			mediaRecorder.setMaxFileSize(maxFileSizeInBytes);

			mediaRecorder.prepare();
			mediaRecorder.start();

			return true;
		} catch (IllegalStateException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
			return false;
		} catch (IOException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
			return false;
		}
	}

In this method we are preparing the MediaRecorder with all the necessary details.

First, we unlock the camera so it can be handed over, in a usable state, to another process, in this case the media recording process. This is the camera.unlock() call at the top of the try block.

Then we set all the properties of the MediaRecorder.
Two things are important here.
The first is the order in which the methods are called. For example, we need to set the sources before setting the encoders, and we have to set the encoders before calling prepare.
The second, and less documented, one is that ALL of these properties have to be set. prepare is a very sensitive and obscure method: its implementation is a native function that just returns an error code when something goes wrong. So, for example, if you forget to set maxDuration on the MediaRecorder above, on most devices you will get an obscure “prepare failed” error that gives no hint at all that you did not set the maxDuration property. Many people assume these properties are not required and run into these hard-to-debug errors.

After preparing the recorder, we start the actual recording.
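Since we set a maximum duration and a maximum file size, recording can also stop by itself. This is not part of the tutorial code above, but if you want to react to that, MediaRecorder lets you register an info listener inside startRecording, before calling prepare; a minimal sketch:

// Optional: get notified when one of the configured limits is hit.
mediaRecorder.setOnInfoListener(new MediaRecorder.OnInfoListener() {
	@Override
	public void onInfo(MediaRecorder mr, int what, int extra) {
		if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED
				|| what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED) {
			// The recorder stops on its own in these cases; update the UI here
			// (for example re-enable the start button) rather than calling stop() again.
			Log.i(TAG, "Recording stopped because a limit was reached: " + what);
		}
	}
});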

Stop recording

Then we stop recording in the following method:

public void stopRecording(){
	mediaRecorder.stop();
	// Release the recorder's resources so a new recording can be started later.
	mediaRecorder.release();
	camera.lock();
}

which speaks for itself.

Note: To finish the activity, our methods still need to be linked to button actions. We leave this to the reader; the easiest way is probably to add start and stop buttons to the layout XML file with the SurfaceView, and point their onClick attributes at methods on the activity that call startRecording and stopRecording respectively.
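A minimal sketch of what that could look like (the button ids and method names are our own choices, not prescribed by Android):

// Assuming two buttons in camera_surface.xml along the lines of
//   <Button android:id="@+id/btn_start" android:onClick="onStartClicked" ... />
//   <Button android:id="@+id/btn_stop"  android:onClick="onStopClicked" ... />
// the activity only needs two public methods taking a View parameter
// (requires: import android.view.View;):

public void onStartClicked(View view) {
	if (startRecording()) {
		Toast.makeText(this, "Recording started", Toast.LENGTH_SHORT).show();
	} else {
		Toast.makeText(this, "Could not start recording", Toast.LENGTH_SHORT).show();
	}
}

public void onStopClicked(View view) {
	stopRecording();
	Toast.makeText(this, "Recording stopped", Toast.LENGTH_SHORT).show();
}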

Android 2.1 vs 2.2

At the time of this writing, most Android devices still run 2.1, and most developers aim for their apps to be compatible with 2.1 and above. This makes sense if one looks at the Android platform distribution figures.

The reference documentation has already been updated for 2.2, though.

If we take another look at the official instructions, we notice that we have gone through all the steps mentioned there. We clarified some of them, like “passing a fully initialized SurfaceHolder”, and we also took care of the “see Media recorder information” part.

But we also did some things differently, because we are looking at the 2.2 instructions and some of those methods are not yet available in 2.1.
In general, the camera API has been changing and improving at lightning speed. The downside is that old APIs get deprecated very fast, and that you can't simply use the latest API, since you would seriously hurt your potential number of customers on the market.

Portrait orientation

The setDisplayOrientation method is there in 2.2, but it isn't in 2.1. In fact, capturing video in portrait mode through the API is only supported since 2.2, as clearly stated in the New Developer APIs paragraph of the Android 2.2 highlights.
So, for our activity, it is necessary to specify in the manifest:

android:screenOrientation="landscape"

Otherwise, it is likely that the camera image will have a 90 degree discrepancy with what the user is seeing. (This can be changed by setting the rotation parameter on the camera, but hacking the camera into working in portrait mode on 2.1 is outside the scope of this tutorial.)
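For completeness, a sketch of how that looks in AndroidManifest.xml for the activity from this tutorial (package prefix omitted):

<activity android:name=".CustomVideoCamera"
	android:screenOrientation="landscape" />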

Reconnect

The reconnect method on Camera is another one that is not there yet in 2.1, so we are obviously not calling it.

PixelFormat.JPEG

This constant, which we are using in our activity above, is already deprecated in 2.2. But since ImageFormat.JPEG, the suggested replacement, does not exist yet in 2.1, we are forced to use the deprecated API.

Calling the undocumented setParameters method on MediaRecorder

In 2.2, MediaRecorder has setters for the video encoding bitrate, audio encoding bitrate, number of audio channels and audio sampling rate (setVideoEncodingBitRate, setAudioEncodingBitRate, setAudioChannels and setAudioSamplingRate).
In 2.1, these properties can't be set through the public API.

If we take a look at the VideoCamera implementation in the Android source code, in the 2.1 tree, we find code like:

mMediaRecorder.setParameters(String.format("video-param-encoding-bitrate=%d", mProfile.mVideoBitrate));
mMediaRecorder.setParameters(String.format("audio-param-encoding-bitrate=%d", mProfile.mAudioBitrate));
mMediaRecorder.setParameters(String.format("audio-param-number-of-channels=%d", mProfile.mAudioChannels));
mMediaRecorder.setParameters(String.format("audio-param-sampling-rate=%d", mProfile.mAudioSamplingRate));

Unfortunately, although the setParameters method is present on all 2.1 devices as far as I know, it is not part of the public API, so 2.1 developers are left out in the cold there.
Luckily, there is a workaround.

When preparing the MediaRecorder you can add the following lines:

// Requires: import java.lang.reflect.Method; and import java.lang.reflect.InvocationTargetException;
// Look up the non-public setParameters(String) method on MediaRecorder through reflection.
Method[] methods = mediaRecorder.getClass().getMethods();
for (Method method : methods){
	if (method.getName().equals("setParameters")){
		try {
			// The same key-value strings the 2.1 camera app passes internally.
			method.invoke(mediaRecorder, String.format("video-param-encoding-bitrate=%d", 360000));
			method.invoke(mediaRecorder, String.format("audio-param-encoding-bitrate=%d", 23450));
			method.invoke(mediaRecorder, String.format("audio-param-number-of-channels=%d", 1));
			method.invoke(mediaRecorder, String.format("audio-param-sampling-rate=%d",8000));
		} catch (IllegalArgumentException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		} catch (IllegalAccessException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		} catch (InvocationTargetException e) {
			Log.e(TAG,e.getMessage());
			e.printStackTrace();
		}
	}
}

Through reflection, we iterate over the available methods of the MediaRecorder. If we find the setParameters method, we invoke it, achieving the same effect as the camera app in the Android 2.1 source code.
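As a side note, once you compile against the 2.2 SDK you can use the public setters behind a runtime version check and keep the reflection-based calls only for 2.1 devices. A sketch with the same example values as above (requires: import android.os.Build;):

if (Build.VERSION.SDK_INT >= 8) {
	// Android 2.2 and higher: use the official setters.
	mediaRecorder.setVideoEncodingBitRate(360000);
	mediaRecorder.setAudioEncodingBitRate(23450);
	mediaRecorder.setAudioChannels(1);
	mediaRecorder.setAudioSamplingRate(8000);
} else {
	// Android 2.1: fall back to the reflection-based setParameters calls shown above.
}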