How do I manipulate the camera preview?

There are several tutorials out there explaining how to get a simple camera preview up and running on an Android device. But I couldn't find any example explaining how to manipulate the image before it gets rendered.
What I want to do is implement a custom color filter to simulate e.g. red and/or green deficiency.

I did some research on this and put together a working(ish) example. Here's what I found. Getting the raw data coming off of the camera is quite easy: it is returned as a YUV byte array. You need to draw it manually onto a surface to be able to modify it. To do that you need a SurfaceView on which you can manually run draw calls. There are a couple of flags you can set to accomplish that.

In order to do the draw call manually, you need to convert the byte array into a bitmap of some sort. Bitmap and BitmapDecoder don't seem to handle YUV byte arrays very well. There is a bug filed for this, but I don't know what its status is. So people have been trying to decode the byte array into an RGB format themselves.
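To give a feel for that byte layout: in the NV21 format the preview callback typically delivers, the first width*height bytes are the Y (luminance) plane, so even without a full decode you can already pull a grayscale image out of the frame. A minimal plain-Java sketch (class and method names are mine, not from the answers below):

```java
// Sketch: converting the Y (luminance) plane of an NV21 preview frame into
// grayscale ARGB pixels. In NV21, the first width*height bytes are the Y
// plane; the interleaved V/U chroma bytes follow after it.
public class Nv21Gray {

    public static int[] yPlaneToArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        for (int i = 0; i < width * height; i++) {
            int y = nv21[i] & 0xff; // bytes are signed in Java, so mask
            argb[i] = 0xff000000 | (y << 16) | (y << 8) | y;
        }
        return argb;
    }

    public static void main(String[] args) {
        // an NV21 frame is width*height*3/2 bytes in total
        byte[] frame = new byte[2 * 2 * 3 / 2];
        frame[0] = (byte) 200; // one bright pixel
        int[] out = yPlaneToArgb(frame, 2, 2);
        System.out.println(Integer.toHexString(out[0])); // ffc8c8c8
        System.out.println(Integer.toHexString(out[1])); // ff000000
    }
}
```

The resulting int array can be handed straight to `Canvas.drawBitmap(int[], ...)`, which is essentially what the first answer's grayscale hack does.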

It seems that doing the decoding manually has been a bit slow, and people have had varying degrees of success with it. Something like this should probably really be done with native code at the NDK level.
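To give an idea of the per-pixel work involved, here is a sketch of a standard BT.601-style YUV-to-RGB conversion for one pixel. The constants are the commonly quoted approximations (an assumption on my part, not taken from the code below); this inner loop, repeated for every pixel of every frame, is what gets expensive in Java and is the candidate for NDK code:

```java
// Sketch: BT.601-style conversion of one YUV pixel to an ARGB int.
// Coefficients are the commonly cited approximations (1.402, 0.344, 0.714,
// 1.772), not the shift-based variant used in the decodeYUV method below.
public class YuvPixel {

    public static int toArgb(int y, int u, int v) {
        int d = u - 128, e = v - 128; // chroma is stored offset by 128
        int r = clamp(y + (int) (1.402f * e));
        int g = clamp(y - (int) (0.344f * d) - (int) (0.714f * e));
        int b = clamp(y + (int) (1.772f * d));
        return 0xff000000 | (r << 16) | (g << 8) | b;
    }

    private static int clamp(int x) {
        return x < 0 ? 0 : (x > 255 ? 255 : x);
    }

    public static void main(String[] args) {
        // neutral chroma (128, 128) should give pure gray levels
        System.out.println(Integer.toHexString(toArgb(255, 128, 128))); // ffffffff
        System.out.println(Integer.toHexString(toArgb(0, 128, 128)));   // ff000000
    }
}
```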

Still, it is possible to get it working. Also, my little demo is just a couple of hours of hacking things together (I guess doing this caught my imagination a little too much). So chances are that with some tweaking you could much improve what I've managed to get working.

This little code snippet also contains a couple of other gems I found. If all you want is to be able to draw over the surface, you can override the surface's onDraw function — you could analyze the returned camera image and draw an overlay — that would be a lot faster than trying to process every frame. Also, if you want the camera preview itself to show up, change SurfaceHolder.SURFACE_TYPE_NORMAL to SURFACE_TYPE_PUSH_BUFFERS. So, two modifications to the code — the commented-out code:

```java
// try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
//   { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }
```

And:

```java
SurfaceHolder.SURFACE_TYPE_NORMAL
// SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS - for the preview to work
```

should allow you to overlay frames based on the camera preview on top of the real preview.

Anyway, here is a working piece of code — it should give you something to start with.

Just put a line like this in one of your views:

```xml
<pathtocustomview.MySurfaceView
    android:id="@+id/surface_camera"
    android:layout_width="fill_parent"
    android:layout_height="10dip"
    android:layout_weight="1" >
</pathtocustomview.MySurfaceView>
```

And include this class in your source code:

```java
package pathtocustomview;

import java.io.IOException;
import java.nio.Buffer;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.hardware.Camera;
import android.util.AttributeSet;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceHolder.Callback;
import android.view.SurfaceView;

public class MySurfaceView extends SurfaceView implements Callback,
        Camera.PreviewCallback {

    private SurfaceHolder mHolder;
    private Camera mCamera;
    private boolean isPreviewRunning = false;
    private byte[] rgbbuffer = new byte[256 * 256];
    private int[] rgbints = new int[256 * 256];

    protected final Paint rectanglePaint = new Paint();

    public MySurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        rectanglePaint.setARGB(100, 200, 0, 0);
        rectanglePaint.setStyle(Paint.Style.FILL);
        rectanglePaint.setStrokeWidth(2);

        mHolder = getHolder();
        mHolder.addCallback(this);
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawRect(new Rect((int) (Math.random() * 100),
                (int) (Math.random() * 100), 200, 200), rectanglePaint);
        Log.w(this.getClass().getName(), "On Draw Called");
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int width,
            int height) {
    }

    public void surfaceCreated(SurfaceHolder holder) {
        synchronized (this) {
            this.setWillNotDraw(false); // This allows us to make our own
                                        // draw calls to this canvas

            mCamera = Camera.open();

            Camera.Parameters p = mCamera.getParameters();
            p.setPreviewSize(240, 160);
            mCamera.setParameters(p);

            // try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
            //   { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }

            mCamera.startPreview();
            mCamera.setPreviewCallback(this);
        }
    }

    public void surfaceDestroyed(SurfaceHolder holder) {
        synchronized (this) {
            try {
                if (mCamera != null) {
                    mCamera.stopPreview();
                    isPreviewRunning = false;
                    mCamera.release();
                }
            } catch (Exception e) {
                Log.e("Camera", e.getMessage());
            }
        }
    }

    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.d("Camera", "Got a camera frame");

        Canvas c = null;

        if (mHolder == null) {
            return;
        }

        try {
            synchronized (mHolder) {
                c = mHolder.lockCanvas(null);

                // Do your drawing here.
                // The data you get back is in YUV format, and you can't do
                // much with it until you convert it to RGB.
                int bwCounter = 0;
                int yuvsCounter = 0;
                for (int y = 0; y < 160; y++) {
                    System.arraycopy(data, yuvsCounter, rgbbuffer, bwCounter, 240);
                    yuvsCounter = yuvsCounter + 240;
                    bwCounter = bwCounter + 256;
                }

                for (int i = 0; i < rgbints.length; i++) {
                    rgbints[i] = (int) rgbbuffer[i];
                }

                // decodeYUV(rgbbuffer, data, 100, 100);
                c.drawBitmap(rgbints, 0, 256, 0, 0, 256, 256, false, new Paint());

                Log.d("SOMETHING", "Got Bitmap");
            }
        } finally {
            // do this in a finally so that if an exception is thrown
            // during the above, we don't leave the Surface in an
            // inconsistent state
            if (c != null) {
                mHolder.unlockCanvasAndPost(c);
            }
        }
    }
}
```

I used walta's solution, but I had some problems with the YUV conversion, the camera frame output sizes, and crashes on camera release.

Finally, the following code worked for me:

```java
public class MySurfaceView extends SurfaceView implements Callback,
        Camera.PreviewCallback {

    private static final String TAG = "MySurfaceView";

    private int width;
    private int height;
    private SurfaceHolder mHolder;
    private Camera mCamera;
    private int[] rgbints;
    private boolean isPreviewRunning = false;
    private int mMultiplyColor;

    public MySurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mHolder = getHolder();
        mHolder.addCallback(this);
        mMultiplyColor = getResources().getColor(R.color.multiply_color);
    }

    // @Override
    // protected void onDraw(Canvas canvas) {
    //     Log.w(this.getClass().getName(), "On Draw Called");
    // }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width,
            int height) {
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        synchronized (this) {
            if (isPreviewRunning)
                return;

            // This allows us to make our own draw calls to this canvas
            this.setWillNotDraw(false);

            mCamera = Camera.open();
            isPreviewRunning = true;

            Camera.Parameters p = mCamera.getParameters();
            Size size = p.getPreviewSize();
            width = size.width;
            height = size.height;
            p.setPreviewFormat(ImageFormat.NV21);
            showSupportedCameraFormats(p);
            mCamera.setParameters(p);

            rgbints = new int[width * height];

            // try { mCamera.setPreviewDisplay(holder); } catch (IOException e)
            //   { Log.e("Camera", "mCamera.setPreviewDisplay(holder);"); }

            mCamera.startPreview();
            mCamera.setPreviewCallback(this);
        }
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        synchronized (this) {
            try {
                if (mCamera != null) {
                    // mHolder.removeCallback(this);
                    mCamera.setPreviewCallback(null);
                    mCamera.stopPreview();
                    isPreviewRunning = false;
                    mCamera.release();
                }
            } catch (Exception e) {
                Log.e("Camera", e.getMessage());
            }
        }
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Log.d("Camera", "Got a camera frame");
        if (!isPreviewRunning)
            return;

        Canvas canvas = null;

        if (mHolder == null) {
            return;
        }

        try {
            synchronized (mHolder) {
                canvas = mHolder.lockCanvas(null);
                int canvasWidth = canvas.getWidth();
                int canvasHeight = canvas.getHeight();

                decodeYUV(rgbints, data, width, height);

                // draw the decoded image, centered on the canvas
                canvas.drawBitmap(rgbints, 0, width,
                        canvasWidth - ((width + canvasWidth) >> 1),
                        canvasHeight - ((height + canvasHeight) >> 1),
                        width, height, false, null);

                // apply some color filter
                canvas.drawColor(mMultiplyColor, Mode.MULTIPLY);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // do this in a finally so that if an exception is thrown
            // during the above, we don't leave the Surface in an
            // inconsistent state
            if (canvas != null) {
                mHolder.unlockCanvasAndPost(canvas);
            }
        }
    }

    /**
     * Decodes a YUV frame to a buffer which can be used to create a bitmap.
     * Use this for OS versions below FROYO, which added a native YUV decoder.
     * Decodes the Y, U, and V values of the YUV 420 buffer described as
     * YCbCr_422_SP by Android.
     *
     * @param out the outgoing array of RGB pixels
     * @param fg the incoming frame bytes
     * @param width of the source frame
     * @param height of the source frame
     * @throws NullPointerException
     * @throws IllegalArgumentException
     */
    public void decodeYUV(int[] out, byte[] fg, int width, int height)
            throws NullPointerException, IllegalArgumentException {
        int sz = width * height;
        if (out == null)
            throw new NullPointerException("buffer out is null");
        if (out.length < sz)
            throw new IllegalArgumentException("buffer out size " + out.length
                    + " < minimum " + sz);
        if (fg == null)
            throw new NullPointerException("buffer 'fg' is null");
        if (fg.length < sz)
            throw new IllegalArgumentException("buffer fg size " + fg.length
                    + " < minimum " + sz * 3 / 2);
        int i, j;
        int Y, Cr = 0, Cb = 0;
        for (j = 0; j < height; j++) {
            int pixPtr = j * width;
            final int jDiv2 = j >> 1;
            for (i = 0; i < width; i++) {
                Y = fg[pixPtr];
                if (Y < 0)
                    Y += 255;
                if ((i & 0x1) != 1) {
                    final int cOff = sz + jDiv2 * width + (i >> 1) * 2;
                    Cb = fg[cOff];
                    if (Cb < 0)
                        Cb += 127;
                    else
                        Cb -= 128;
                    Cr = fg[cOff + 1];
                    if (Cr < 0)
                        Cr += 127;
                    else
                        Cr -= 128;
                }
                int R = Y + Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
                if (R < 0)
                    R = 0;
                else if (R > 255)
                    R = 255;
                int G = Y - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1)
                        + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
                if (G < 0)
                    G = 0;
                else if (G > 255)
                    G = 255;
                int B = Y + Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
                if (B < 0)
                    B = 0;
                else if (B > 255)
                    B = 255;
                out[pixPtr++] = 0xff000000 + (B << 16) + (G << 8) + R;
            }
        }
    }

    private void showSupportedCameraFormats(Parameters p) {
        List<Integer> supportedPictureFormats = p.getSupportedPreviewFormats();
        Log.d(TAG, "preview format: "
                + cameraFormatIntToString(p.getPreviewFormat()));
        for (Integer x : supportedPictureFormats) {
            Log.d(TAG, "supported format: "
                    + cameraFormatIntToString(x.intValue()));
        }
    }

    private String cameraFormatIntToString(int format) {
        switch (format) {
        case PixelFormat.JPEG:
            return "JPEG";
        case PixelFormat.YCbCr_420_SP:
            return "NV21";
        case PixelFormat.YCbCr_422_I:
            return "YUY2";
        case PixelFormat.YCbCr_422_SP:
            return "NV16";
        case PixelFormat.RGB_565:
            return "RGB_565";
        default:
            return "Unknown: " + format;
        }
    }
}
```

To use it, run the following code from your activity's onCreate:

```java
SurfaceView surfaceView = new MySurfaceView(this, null);
RelativeLayout.LayoutParams layoutParams = new RelativeLayout.LayoutParams(
        RelativeLayout.LayoutParams.MATCH_PARENT,
        RelativeLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(layoutParams);
mRelativeLayout.addView(surfaceView);
```
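If the goal is still the question's red/green deficiency simulation, the `Mode.MULTIPLY` draw could instead be replaced by running the decoded RGB ints through a color matrix before drawing the bitmap. A plain-Java sketch, using a commonly cited protanopia approximation matrix (the exact coefficients are my assumption, not from the code above):

```java
// Sketch: simulating protanopia (red deficiency) by applying a 3x3 color
// matrix to each decoded ARGB pixel. The matrix values are a commonly cited
// approximation; tune or replace them for other deficiency types.
public class ProtanopiaFilter {

    private static final float[][] M = {
            { 0.567f, 0.433f, 0f     },
            { 0.558f, 0.442f, 0f     },
            { 0f,     0.242f, 0.758f } };

    public static void apply(int[] argb) {
        for (int i = 0; i < argb.length; i++) {
            int p = argb[i];
            int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
            int r2 = clamp(M[0][0] * r + M[0][1] * g + M[0][2] * b);
            int g2 = clamp(M[1][0] * r + M[1][1] * g + M[1][2] * b);
            int b2 = clamp(M[2][0] * r + M[2][1] * g + M[2][2] * b);
            argb[i] = (p & 0xff000000) | (r2 << 16) | (g2 << 8) | b2;
        }
    }

    private static int clamp(float x) {
        return x < 0 ? 0 : (x > 255 ? 255 : (int) x);
    }
}
```

In the `onPreviewFrame` above, calling `ProtanopiaFilter.apply(rgbints)` right after `decodeYUV(...)` would filter the frame before `drawBitmap`. Doing this per pixel in Java has the same performance caveats discussed earlier; on real devices a ColorMatrixColorFilter or a GPU shader is the more practical route.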

Have you looked at GPUImage?

It was originally an OSX/iOS library made by Brad Larson, which exists as an Objective-C wrapper around OpenGL/ES.

https://github.com/BradLarson/GPUImage

Users at CyberAgent have made an Android port (which doesn't have complete feature parity), which is a set of Java wrappers on top of OpenGL ES. It's relatively high level and pretty easy to implement, with many of the same features mentioned above...

https://github.com/CyberAgent/android-gpuimage