Recognizing ID card information requires image-processing expertise that I don't have, so I went looking for a third-party API. Everything I found was paid. Eventually I came across a product called Yunmai, http://ocr.ccyunmai.com/, which is free for 15 days; after that it charges per recognition at a steep 0.3 yuan per call. For a developer on a budget, that adds up fast.
But fortune favors the persistent: Yunmai also hosts an online recognition demo at http://ocr.ccyunmai.com/idcard/. What good is that address? Well, for someone who scrapes data for a living, plenty: we can drive that page programmatically and recognize ID cards without paying a cent. First, open the page and see what it looks like.
Let's upload one of Yunmai's own test images to try it out.
Before uploading, open the developer tools (F12 in Chrome) and switch to the Network tab, then perform the upload. When it finishes, the recognition result comes back; look for the UploadImg.action request.
Click it to inspect the request body.
Focus on the part in the red box: if we supply exactly this information to http://ocr.ccyunmai.com/UploadImg.action, then as long as the ID card image is valid, it returns the recognition result. All we have to do is reproduce this request in code.
The request carries the Host, Origin, Referer and User-Agent headers, copied straight from what the browser sent. The method is POST, and the multipart body has three parts: callbackurl with value /idcard/; action with value idcard; and the uploaded file itself, in a part named img whose filename is the local file name (here test-idcard.jpg) and whose Content-Type is image/jpeg. Next, let's simulate this request.
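Before reproducing the request with OkHttp below, it may help to see the wire format we just described. Here is a standalone sketch that assembles an equivalent multipart/form-data body by hand using only the JDK (the class name, method name, and boundary string are made up for illustration; in the real code OkHttp generates all of this for us):

```java
import java.nio.charset.StandardCharsets;

public class MultipartSketch {
    // Builds the body described above: a callbackurl part, an action part,
    // then the img file part carrying the JPEG bytes.
    static String buildBody(String boundary, String fileName, byte[] imageBytes) {
        StringBuilder sb = new StringBuilder();
        sb.append("--").append(boundary).append("\r\n")
          .append("Content-Disposition: form-data; name=\"callbackurl\"\r\n\r\n")
          .append("/idcard/\r\n");
        sb.append("--").append(boundary).append("\r\n")
          .append("Content-Disposition: form-data; name=\"action\"\r\n\r\n")
          .append("idcard\r\n");
        sb.append("--").append(boundary).append("\r\n")
          .append("Content-Disposition: form-data; name=\"img\"; filename=\"")
          .append(fileName).append("\"\r\n")
          .append("Content-Type: image/jpeg\r\n\r\n")
          // Raw bytes rendered 1:1 for the sketch; a real client writes them as-is
          .append(new String(imageBytes, StandardCharsets.ISO_8859_1)).append("\r\n");
        sb.append("--").append(boundary).append("--\r\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        String body = buildBody("----sketchBoundary", "test-idcard.jpg", new byte[]{65, 66});
        System.out.println(body);
    }
}
```

Each part is delimited by the boundary line, and the final boundary carries a trailing `--`; that is the entire contract the server expects.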
We'll use OkHttp as the network layer, extending the progress-listener code from the earlier article, "Android OkHttp upload and download progress monitoring".
Add the Gradle dependencies:
compile 'com.squareup.okhttp:okhttp:2.5.0'
compile 'cn.edu.zafu:coreprogress:0.0.1'
compile 'org.jsoup:jsoup:1.8.3'
Notice that jsoup is also a dependency; we'll use it later to parse the response.
Uploading requires network access, and we also need to read the image file, so add these two permissions:
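The post doesn't show the manifest snippet; assuming the standard Android permission names for network and external-storage access, the two entries would look like this:

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
```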
Initialize OkHttp; to keep a slow upload from aborting, set very generous timeouts:

private OkHttpClient mOkHttpClient = new OkHttpClient();

private void initClient() {
    mOkHttpClient.setConnectTimeout(1000, TimeUnit.MINUTES);
    mOkHttpClient.setReadTimeout(1000, TimeUnit.MINUTES);
    mOkHttpClient.setWriteTimeout(1000, TimeUnit.MINUTES);
}
Then assign a member variable the path of the local ID card image:

private String mPhotoPath = ...; // the image's file path
Now we build the headers and the POST body. We want progress feedback while the file uploads, so the listener library mentioned above is used.
private void uploadAndRecognize() {
    if (!TextUtils.isEmpty(mPhotoPath)) {
        File file = new File(mPhotoPath);
        // Build the multipart request body
        RequestBody requestBody = new MultipartBuilder().type(MultipartBuilder.FORM)
                .addPart(Headers.of("Content-Disposition", "form-data; name=\"callbackurl\""), RequestBody.create(null, "/idcard/"))
                .addPart(Headers.of("Content-Disposition", "form-data; name=\"action\""), RequestBody.create(null, "idcard"))
                .addPart(Headers.of("Content-Disposition", "form-data; name=\"img\"; filename=\"idcardFront_user.jpg\""), RequestBody.create(MediaType.parse("image/jpeg"), file))
                .build();
        // This listener is called back on the UI thread, so it can touch views directly
        final UIProgressRequestListener uiProgressRequestListener = new UIProgressRequestListener() {
            @Override
            public void onUIRequestProgress(long bytesWrite, long contentLength, boolean done) {
                Log.e(TAG, "bytesWrite:" + bytesWrite);
                Log.e(TAG, "contentLength:" + contentLength);
                Log.e(TAG, (100 * bytesWrite) / contentLength + "% done");
                Log.e(TAG, "done:" + done);
                Log.e(TAG, "================================");
                // Update the progress bar from the UI thread
                mProgressBar.setProgress((int) ((100 * bytesWrite) / contentLength));
            }
        };
        // Build the request with the headers captured from the browser
        final Request request = new Request.Builder()
                .header("Host", "ocr.ccyunmai.com")
                .header("Origin", "http://ocr.ccyunmai.com")
                .header("Referer", "http://ocr.ccyunmai.com/idcard/")
                .header("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2398.0 Safari/537.36")
                .url("http://ocr.ccyunmai.com/UploadImg.action")
                .post(ProgressHelper.addProgressRequestListener(requestBody, uiProgressRequestListener))
                .build();
        // Fire the request asynchronously
        mOkHttpClient.newCall(request).enqueue(new Callback() {
            @Override
            public void onFailure(Request request, IOException e) {
                Log.e(TAG, "error");
            }

            @Override
            public void onResponse(Response response) throws IOException {
                String result = response.body().string();
            }
        });
    }
}
When the request succeeds, onResponse is invoked, and the local variable result holds the final response.
Let's view the source of the returned page in the browser to make the recognition result easier to parse.
The result is wrapped in a div with class left, which contains a fieldset; the recognized fields live inside it. So we parse result with Jsoup, the dependency added earlier:
String result = response.body().string();
Document parse = Jsoup.parse(result);
Elements select = parse.select("div.left fieldset");
Log.e(TAG, select.text());
Document parse1 = Jsoup.parse(select.text());
StringBuilder builder = new StringBuilder();
String name = parse1.select("name").text();
String cardno = parse1.select("cardno").text();
String sex = parse1.select("sex").text();
String folk = parse1.select("folk").text();
String birthday = parse1.select("birthday").text();
String address = parse1.select("address").text();
String issue_authority = parse1.select("issue_authority").text();
String valid_period = parse1.select("valid_period").text();
builder.append("name:" + name).append("\n")
        .append("cardno:" + cardno).append("\n")
        .append("sex:" + sex).append("\n")
        .append("folk:" + folk).append("\n")
        .append("birthday:" + birthday).append("\n")
        .append("address:" + address).append("\n")
        .append("issue_authority:" + issue_authority).append("\n")
        .append("valid_period:" + valid_period).append("\n");
Log.e(TAG, "name:" + name);
Log.e(TAG, "cardno:" + cardno);
Log.e(TAG, "sex:" + sex);
Log.e(TAG, "folk:" + folk);
Log.e(TAG, "birthday:" + birthday);
Log.e(TAG, "address:" + address);
Log.e(TAG, "issue_authority:" + issue_authority);
Log.e(TAG, "valid_period:" + valid_period);
Simple, isn't it? Every field has been extracted. Check the log to see the recognition result.
The recognition is actually quite accurate. So recognition works; but we'd like to shoot the photo ourselves and upload it for recognition, like this:
That brings in Android's Camera and SurfaceView. Before diving in, let's write an auto-focus manager class.
/**
 * Auto-focus manager: refocuses the camera at a fixed interval
 * User:lizhangqu([email protected])
 * Date:2015-09-05
 * Time: 11:11
 */
public class AutoFocusManager implements Camera.AutoFocusCallback{
private static final String TAG = AutoFocusManager.class.getSimpleName();
private static final long AUTO_FOCUS_INTERVAL_MS = 2000L;
private static final Collection<String> FOCUS_MODES_CALLING_AF;
static {
FOCUS_MODES_CALLING_AF = new ArrayList<String>(2);
FOCUS_MODES_CALLING_AF.add(Camera.Parameters.FOCUS_MODE_AUTO);
FOCUS_MODES_CALLING_AF.add(Camera.Parameters.FOCUS_MODE_MACRO);
}
private boolean stopped;
private boolean focusing;
private final boolean useAutoFocus;
private final Camera camera;
private AsyncTask<?, ?, ?> outstandingTask;
public AutoFocusManager(Camera camera) {
this.camera = camera;
String currentFocusMode = camera.getParameters().getFocusMode();
useAutoFocus = FOCUS_MODES_CALLING_AF.contains(currentFocusMode);
Log.e(TAG, "Current focus mode '" + currentFocusMode + "'; use auto focus? " + useAutoFocus);
start();
}
@Override
public synchronized void onAutoFocus(boolean success, Camera theCamera) {
focusing = false;
autoFocusAgainLater();
}
private synchronized void autoFocusAgainLater() {
if (!stopped && outstandingTask == null) {
AutoFocusTask newTask = new AutoFocusTask();
try {
newTask.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
outstandingTask = newTask;
} catch (RejectedExecutionException ree) {
Log.e(TAG, "Could not request auto focus", ree);
}
}
}
/**
 * Start auto focus
 */
public synchronized void start() {
if (useAutoFocus) {
outstandingTask = null;
if (!stopped && !focusing) {
try {
camera.autoFocus(this);
focusing = true;
} catch (RuntimeException re) {
// Have heard RuntimeException reported in Android 4.0.x+; continue?
Log.e(TAG, "Unexpected exception while focusing", re);
// Try again later to keep cycle going
autoFocusAgainLater();
}
}
}
}
private synchronized void cancelOutstandingTask() {
if (outstandingTask != null) {
if (outstandingTask.getStatus() != AsyncTask.Status.FINISHED) {
outstandingTask.cancel(true);
}
outstandingTask = null;
}
}
/**
 * Stop auto focus
 */
public synchronized void stop() {
stopped = true;
if (useAutoFocus) {
cancelOutstandingTask();
// Doesn't hurt to call this even if not focusing
try {
camera.cancelAutoFocus();
} catch (RuntimeException re) {
// Have heard RuntimeException reported in Android 4.0.x+; continue?
Log.e(TAG, "Unexpected exception while cancelling focusing", re);
}
}
}
private final class AutoFocusTask extends AsyncTask<Object, Object, Object> {
@Override
protected Object doInBackground(Object... voids) {
try {
// Wait a bit, then trigger the next focus cycle
Thread.sleep(AUTO_FOCUS_INTERVAL_MS);
} catch (InterruptedException e) {
// cancelled; fall through
}
start();
return null;
}
}
}
This class is lifted from ZXing; all it does is refocus every couple of seconds. The code is self-explanatory, so I won't dwell on it.
Next up is the Camera management class, also extracted from ZXing and trimmed down.
/**
 * Camera management class
 * User:lizhangqu([email protected])
 * Date:2015-09-05
 * Time: 10:56
 */
public class CameraManager {
private static final String TAG = CameraManager.class.getName();
private Camera camera;
private Camera.Parameters parameters;
private AutoFocusManager autoFocusManager;
private int requestedCameraId = -1;
private boolean initialized;
private boolean previewing;
/**
 * Open a camera
 *
 * @param cameraId camera id
 * @return Camera
 */
public Camera open(int cameraId) {
int numCameras = Camera.getNumberOfCameras();
if (numCameras == 0) {
Log.e(TAG, "No cameras!");
return null;
}
boolean explicitRequest = cameraId >= 0;
if (!explicitRequest) {
// Select a camera if no explicit camera requested
int index = 0;
while (index < numCameras) {
Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
Camera.getCameraInfo(index, cameraInfo);
if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_BACK) {
break;
}
index++;
}
cameraId = index;
}
Camera camera;
if (cameraId < numCameras) {
Log.e(TAG, "Opening camera #" + cameraId);
camera = Camera.open(cameraId);
} else {
if (explicitRequest) {
Log.e(TAG, "Requested camera does not exist: " + cameraId);
camera = null;
} else {
Log.e(TAG, "No camera facing back; returning camera #0");
camera = Camera.open(0);
}
}
}
return camera;
}
/**
 * Open the camera driver and bind the preview surface
 *
 * @param holder SurfaceHolder
 * @throws IOException IOException
 */
public synchronized void openDriver(SurfaceHolder holder)
throws IOException {
Log.e(TAG, "openDriver");
Camera theCamera = camera;
if (theCamera == null) {
theCamera = open(requestedCameraId);
if (theCamera == null) {
throw new IOException();
}
camera = theCamera;
}
theCamera.setPreviewDisplay(holder);
if (!initialized) {
initialized = true;
parameters = camera.getParameters();
parameters.setPreviewSize(800, 600);
parameters.setPictureFormat(ImageFormat.JPEG);
parameters.setJpegQuality(100);
parameters.setPictureSize(800, 600);
theCamera.setParameters(parameters);
}
}
/**
 * Whether the camera is open
 *
 * @return true if the camera is open
 */
public synchronized boolean isOpen() {
return camera != null;
}
/**
 * Close the camera
 */
public synchronized void closeDriver() {
Log.e(TAG, "closeDriver");
if (camera != null) {
camera.release();
camera = null;
}
}
/**
 * Start the preview
 */
public synchronized void startPreview() {
Log.e(TAG, "startPreview");
Camera theCamera = camera;
if (theCamera != null && !previewing) {
theCamera.startPreview();
previewing = true;
autoFocusManager = new AutoFocusManager(camera);
}
}
/**
 * Stop the preview
 */
public synchronized void stopPreview() {
Log.e(TAG, "stopPreview");
if (autoFocusManager != null) {
autoFocusManager.stop();
autoFocusManager = null;
}
if (camera != null && previewing) {
camera.stopPreview();
previewing = false;
}
}
/**
 * Turn on the flashlight (torch mode)
 */
public synchronized void openLight() {
Log.e(TAG, "openLight");
if (camera != null) {
parameters = camera.getParameters();
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
camera.setParameters(parameters);
}
}
/**
 * Turn off the flashlight
 */
public synchronized void offLight() {
Log.e(TAG, "offLight");
if (camera != null) {
parameters = camera.getParameters();
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_OFF);
camera.setParameters(parameters);
}
}
/**
 * Take a picture
 *
 * @param shutter ShutterCallback
 * @param raw PictureCallback
 * @param jpeg PictureCallback
 */
public synchronized void takePicture(final Camera.ShutterCallback shutter, final Camera.PictureCallback raw,
final Camera.PictureCallback jpeg) {
camera.takePicture(shutter, raw, jpeg);
}
}
In the screenshot above, the viewfinder has a blue border with a line of hint text. That's a custom SurfaceView, and we have to implement the drawing ourselves.
/**
 * Viewfinder border overlay
 * User:lizhangqu([email protected])
 * Date:2015-09-04
 * Time: 18:03
 */
public class PreviewBorderView extends SurfaceView implements SurfaceHolder.Callback, Runnable {
private static final String TAG = PreviewBorderView.class.getSimpleName();
private int mScreenH;
private int mScreenW;
private Canvas mCanvas;
private Paint mPaint;
private Paint mPaintLine;
private SurfaceHolder mHolder;
private Thread mThread;
private static final String DEFAULT_TIPS_TEXT = "請將方框對准證件拍攝"; // "Align the frame with the document"
private static final int DEFAULT_TIPS_TEXT_SIZE = 16;
private static final int DEFAULT_TIPS_TEXT_COLOR = Color.GREEN;
/**
 * Custom attributes
 */
private float tipTextSize;
private int tipTextColor;
private String tipText;
public PreviewBorderView(Context context) {
this(context, null);
}
public PreviewBorderView(Context context, AttributeSet attrs) {
this(context, attrs, 0);
}
public PreviewBorderView(Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
initAttrs(context, attrs);
init();
}
/**
 * Initialize the custom attributes
 *
 * @param context Context
 * @param attrs AttributeSet
 */
private void initAttrs(Context context, AttributeSet attrs) {
TypedArray a = context.obtainStyledAttributes(attrs, R.styleable.PreviewBorderView);
try {
tipTextSize = a.getDimension(R.styleable.PreviewBorderView_tipTextSize, TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, DEFAULT_TIPS_TEXT_SIZE, getResources().getDisplayMetrics()));
tipTextColor = a.getColor(R.styleable.PreviewBorderView_tipTextColor, DEFAULT_TIPS_TEXT_COLOR);
tipText = a.getString(R.styleable.PreviewBorderView_tipText);
if (tipText == null) {
tipText = DEFAULT_TIPS_TEXT;
}
} finally {
a.recycle();
}
}
/**
 * Initialize the drawing objects
 */
private void init() {
this.mHolder = getHolder();
this.mHolder.addCallback(this);
this.mHolder.setFormat(PixelFormat.TRANSPARENT);
setZOrderOnTop(true);
this.mPaint = new Paint();
this.mPaint.setAntiAlias(true);
this.mPaint.setColor(Color.WHITE);
this.mPaint.setStyle(Paint.Style.FILL_AND_STROKE);
this.mPaint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.CLEAR));
this.mPaintLine = new Paint();
this.mPaintLine.setColor(tipTextColor);
this.mPaintLine.setStrokeWidth(3.0F);
setKeepScreenOn(true);
}
/**
 * Draw the viewfinder frame
 */
private void draw() {
try {
this.mCanvas = this.mHolder.lockCanvas();
this.mCanvas.drawARGB(100, 0, 0, 0);
this.mScreenW = (this.mScreenH * 4 / 3);
Log.e(TAG, "mScreenW:" + mScreenW + " mScreenH:" + mScreenH);
this.mCanvas.drawRect(new RectF(this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH * 1 / 6, this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6), this.mPaint);
this.mCanvas.drawLine(this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH * 1 / 6, this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH * 1 / 6 + 50, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH * 1 / 6, this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6 + 50, this.mScreenH * 1 / 6, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH * 1 / 6, this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH * 1 / 6 + 50, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH * 1 / 6, this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6 - 50, this.mScreenH * 1 / 6, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6, this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6 - 50, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6, this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6 + 50, this.mScreenH - this.mScreenH * 1 / 6, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6, this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6 - 50, this.mPaintLine);
this.mCanvas.drawLine(this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6, this.mScreenH - this.mScreenH * 1 / 6, this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6 - 50, this.mScreenH - this.mScreenH * 1 / 6, this.mPaintLine);
mPaintLine.setTextSize(tipTextSize);
mPaintLine.setAntiAlias(true);
mPaintLine.setDither(true);
float length = mPaintLine.measureText(tipText);
this.mCanvas.drawText(tipText, this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6 + this.mScreenH / 2 - length / 2, this.mScreenH * 1 / 6 - tipTextSize, mPaintLine);
Log.e(TAG, "left:" + (this.mScreenW / 2 - this.mScreenH * 2 / 3 + this.mScreenH * 1 / 6));
Log.e(TAG, "top:" + (this.mScreenH * 1 / 6));
Log.e(TAG, "right:" + (this.mScreenW / 2 + this.mScreenH * 2 / 3 - this.mScreenH * 1 / 6));
Log.e(TAG, "bottom:" + (this.mScreenH - this.mScreenH * 1 / 6));
} catch (Exception e) {
e.printStackTrace();
} finally {
if (this.mCanvas != null) {
this.mHolder.unlockCanvasAndPost(this.mCanvas);
}
}
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
// Grab the width and height, then draw on a worker thread
this.mScreenW = getWidth();
this.mScreenH = getHeight();
this.mThread = new Thread(this);
this.mThread.start();
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
// Stop the drawing thread
try {
mThread.interrupt();
mThread = null;
} catch (Exception e) {
e.printStackTrace();
}
}
@Override
public void run() {
// Draw on the worker thread
draw();
}
}
This uses the PorterDuff.Mode.CLEAR transfer mode to punch the transparent viewfinder out of the dimmed overlay. Note that SurfaceView drawing can legitimately happen on a worker thread. Also note that the SurfaceView surrounding the viewfinder keeps a 4:3 width-to-height ratio, the same ratio set for the Camera preview and picture sizes, so the preview isn't distorted. The drawing logic isn't complicated; the lengths just need to be worked out carefully.
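To make the frame geometry easier to follow, here is the rectangle computation from draw() pulled out into a standalone sketch (the class and method names are mine, not part of the original view), evaluated for a hypothetical 600-pixel-high surface, whose 4:3 width is 800:

```java
public class FrameGeometry {
    // Same arithmetic as in draw(): the frame is centered horizontally,
    // inset by H/6 on top and bottom, and extends H*2/3 - H/6 = H/2 to
    // each side of the vertical center line.
    static int[] frameRect(int screenH) {
        int screenW = screenH * 4 / 3;
        int left = screenW / 2 - screenH * 2 / 3 + screenH / 6;
        int top = screenH / 6;
        int right = screenW / 2 + screenH * 2 / 3 - screenH / 6;
        int bottom = screenH - screenH / 6;
        return new int[]{left, top, right, bottom};
    }

    public static void main(String[] args) {
        int[] r = frameRect(600);
        // For an 800x600 surface the frame runs from (100, 100) to (700, 500):
        // a 600x400 box, i.e. 3:2, close to the shape of an ID-1 card
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    }
}
```

In other words, the frame always occupies two thirds of the surface height and keeps a 3:2 shape regardless of the actual screen size.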
A few custom attributes are used as well.
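The declare-styleable backing R.styleable.PreviewBorderView isn't shown in the post; judging from the three attributes read in initAttrs(), it presumably looks something like this (a sketch, with the formats assumed):

```xml
<resources>
    <declare-styleable name="PreviewBorderView">
        <attr name="tipTextSize" format="dimension" />
        <attr name="tipTextColor" format="color" />
        <attr name="tipText" format="string" />
    </declare-styleable>
</resources>
```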
All that's left is the Activity that shows the preview and takes the picture. It has a few helper methods for obtaining screen dimensions, and it resets the views in the layout to a 4:3 width-to-height ratio. The Activity also has to return a result to its caller; the image data could be large, and a Bundle cannot carry much more than 1MB, so we return the path of the saved file instead. In onCreate we read the caller's parameters from the Intent, falling back to defaults when they're absent.
There are two buttons: one takes the picture, the other toggles the flashlight. Their click listeners call the corresponding methods. Taking a picture requires a callback, in which we save the data and hand the result back to the caller. Everything else is routine setup and teardown; the source below tells the rest.
public class CameraActivity extends AppCompatActivity implements SurfaceHolder.Callback {
private static final String TAG = CameraActivity.class.getSimpleName();
private LinearLayout mLinearLayout;
private PreviewBorderView mPreviewBorderView;
private SurfaceView mSurfaceView;
private CameraManager cameraManager;
private boolean hasSurface;
private Intent mIntent;
private static final String DEFAULT_PATH = "/sdcard/";
private static final String DEFAULT_NAME = "default.jpg";
private static final String DEFAULT_TYPE = "default";
private String filePath;
private String fileName;
private String type;
private Button take, light;
private boolean toggleLight;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_camera);
initIntent();
initLayoutParams();
}
private void initIntent() {
mIntent = getIntent();
filePath = mIntent.getStringExtra("path");
fileName = mIntent.getStringExtra("name");
type = mIntent.getStringExtra("type");
if (filePath == null) {
filePath = DEFAULT_PATH;
}
if (fileName == null) {
fileName = DEFAULT_NAME;
}
if (type == null) {
type = DEFAULT_TYPE;
}
Log.e(TAG, filePath + "/" + fileName + "_" + type);
}
/**
 * Reset the surface to a 4:3 width-to-height ratio; without this the preview image is stretched
 */
private void initLayoutParams() {
take = (Button) findViewById(R.id.take);
light = (Button) findViewById(R.id.light);
take.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
cameraManager.takePicture(null, null, myjpegCallback);
}
});
light.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (!toggleLight) {
toggleLight = true;
cameraManager.openLight();
} else {
toggleLight = false;
cameraManager.offLight();
}
}
});
// Reset widths and heights to 4:3
int widthPixels = getScreenWidth(this);
int heightPixels = getScreenHeight(this);
mLinearLayout = (LinearLayout) findViewById(R.id.linearlaout);
mPreviewBorderView = (PreviewBorderView) findViewById(R.id.borderview);
mSurfaceView = (SurfaceView) findViewById(R.id.surfaceview);
RelativeLayout.LayoutParams surfaceviewParams = (RelativeLayout.LayoutParams) mSurfaceView.getLayoutParams();
surfaceviewParams.width = heightPixels * 4 / 3;
surfaceviewParams.height = heightPixels;
mSurfaceView.setLayoutParams(surfaceviewParams);
RelativeLayout.LayoutParams borderViewParams = (RelativeLayout.LayoutParams) mPreviewBorderView.getLayoutParams();
borderViewParams.width = heightPixels * 4 / 3;
borderViewParams.height = heightPixels;
mPreviewBorderView.setLayoutParams(borderViewParams);
RelativeLayout.LayoutParams linearLayoutParams = (RelativeLayout.LayoutParams) mLinearLayout.getLayoutParams();
linearLayoutParams.width = widthPixels - heightPixels * 4 / 3;
linearLayoutParams.height = heightPixels;
mLinearLayout.setLayoutParams(linearLayoutParams);
Log.e(TAG, "Screen width:" + heightPixels * 4 / 3);
Log.e(TAG, "Screen height:" + heightPixels);
}
@Override
protected void onResume() {
super.onResume();
/**
 * Initialize the camera
 */
cameraManager = new CameraManager();
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surfaceview);
SurfaceHolder surfaceHolder = surfaceView.getHolder();
if (hasSurface) {
// The activity was paused but not stopped, so the surface still exists;
// surfaceCreated() won't fire again, so initialize the camera here
initCamera(surfaceHolder);
} else {
// Install the callback and wait for surfaceCreated() to initialize the camera
surfaceHolder.addCallback(this);
}
}
@Override
public void surfaceCreated(SurfaceHolder holder) {
if (!hasSurface) {
hasSurface = true;
initCamera(holder);
}
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
}
@Override
public void surfaceDestroyed(SurfaceHolder holder) {
hasSurface = false;
}
/**
 * Initialize the camera
 *
 * @param surfaceHolder SurfaceHolder
 */
private void initCamera(SurfaceHolder surfaceHolder) {
if (surfaceHolder == null) {
throw new IllegalStateException("No SurfaceHolder provided");
}
if (cameraManager.isOpen()) {
return;
}
try {
// Open the camera hardware
cameraManager.openDriver(surfaceHolder);
// Start the preview; this may throw a RuntimeException
cameraManager.startPreview();
} catch (Exception ioe) {
Log.e(TAG, "initCamera failed", ioe);
}
}
@Override
protected void onPause() {
/**
 * Stop the camera and release its resources
 */
cameraManager.stopPreview();
cameraManager.closeDriver();
if (!hasSurface) {
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surfaceview);
SurfaceHolder surfaceHolder = surfaceView.getHolder();
surfaceHolder.removeCallback(this);
}
super.onPause();
}
/**
 * Picture-taken callback
 */
Camera.PictureCallback myjpegCallback = new Camera.PictureCallback() {
@Override
public void onPictureTaken(final byte[] data, Camera camera) {
// Decode the captured data into a bitmap
final Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0,
data.length);
int height = bitmap.getHeight();
int width = bitmap.getWidth();
final Bitmap bitmap1 = Bitmap.createBitmap(bitmap, (width - height) / 2, height / 6, height, height * 2 / 3);
Log.e(TAG, "width:" + width + " height:" + height);
Log.e(TAG, "x:" + (width - height) / 2 + " y:" + height / 6 + " width:" + height + " height:" + height * 2 / 3);
// Create a file on the SD card
File path = new File(filePath);
if (!path.exists()) {
path.mkdirs();
}
File file = new File(path, type+_+fileName);
FileOutputStream outStream = null;
try {
// Open an output stream on the file
outStream = new FileOutputStream(file);
// Compress the cropped bitmap into the file
bitmap1.compress(Bitmap.CompressFormat.JPEG,
100, outStream);
outStream.close();
} catch (Exception e) {
e.printStackTrace();
}
Intent intent = new Intent();
Bundle bundle = new Bundle();
bundle.putString("path", file.getAbsolutePath());
bundle.putString("type", type);
intent.putExtras(bundle);
setResult(RESULT_OK, intent);
CameraActivity.this.finish();
}
};
/**
 * Get the screen width in px
 *
 * @param context Context
 * @return screen width
 */
public int getScreenWidth(Context context) {
DisplayMetrics dm = context.getResources().getDisplayMetrics();
return dm.widthPixels;
}
/**
 * Get the screen height
 *
 * @param context Context
 * @return screen height minus the status bar
 */
public int getScreenHeight(Context context) {
DisplayMetrics dm = context.getResources().getDisplayMetrics();
return dm.heightPixels-getStatusBarHeight(context);
}
/**
 * Get the status bar height
 *
 * @param context Context
 * @return status bar height
 */
public int getStatusBarHeight(Context context) {
int statusBarHeight = 0;
try {
Class clazz = Class.forName("com.android.internal.R$dimen");
Object obj = clazz.newInstance();
Field field = clazz.getField("status_bar_height");
int temp = Integer.parseInt(field.get(obj).toString());
statusBarHeight = context.getResources().getDimensionPixelSize(temp);
} catch (Exception e) {
e.printStackTrace();
}
return statusBarHeight;
}
}
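Why does the crop in onPictureTaken line up with the on-screen viewfinder? Running the same arithmetic for a hypothetical 800x600 capture makes it concrete (a standalone sketch; the real values depend on the picture size the camera delivers):

```java
public class CropGeometry {
    // Same crop as onPictureTaken(): x = (w - h) / 2, y = h / 6,
    // crop width = h, crop height = h * 2 / 3
    static int[] cropRect(int width, int height) {
        return new int[]{(width - height) / 2, height / 6, height, height * 2 / 3};
    }

    public static void main(String[] args) {
        int[] c = cropRect(800, 600);
        // An 800x600 capture is cropped at (100, 100) with size 600x400 --
        // exactly the region the PreviewBorderView frame outlines on a
        // 600-pixel-high 4:3 surface
        System.out.println(c[0] + "," + c[1] + "," + c[2] + "," + c[3]);
    }
}
```

Because picture size and preview size are both set to 800x600 in CameraManager.openDriver(), what the user frames on screen is what ends up in the saved JPEG.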
Declare the Activity as landscape in the manifest, along with the camera permission and related entries.
Then it can be launched from any other Activity:
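The post doesn't show these manifest entries either; assuming the standard names, they would look roughly like this:

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />

<activity
    android:name=".CameraActivity"
    android:screenOrientation="landscape" />
```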
Intent intent = new Intent(MainActivity.this, CameraActivity.class);
String pathStr = mPath.getText().toString();
String nameStr = mName.getText().toString();
String typeStr = mType.getText().toString();
if (!TextUtils.isEmpty(pathStr)) {
intent.putExtra("path", pathStr);
}
if (!TextUtils.isEmpty(nameStr)) {
intent.putExtra("name", nameStr);
}
if (!TextUtils.isEmpty(typeStr)) {
intent.putExtra("type", typeStr);
}
startActivityForResult(intent, 100);
Receive the result and display it in an ImageView:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
Log.e(TAG, "onActivityResult");
if (requestCode == 100) {
if (resultCode == RESULT_OK) {
Bundle extras = data.getExtras();
String path = extras.getString("path");
String type = extras.getString("type");
Toast.makeText(getApplicationContext(), "path:" + path + " type:" + type, Toast.LENGTH_LONG).show();
File file = new File(path);
FileInputStream inStream = null;
try {
inStream = new FileInputStream(file);
Bitmap bitmap = BitmapFactory.decodeStream(inStream);
mPhoto.setImageBitmap(bitmap);
inStream.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
super.onActivityResult(requestCode, resultCode, data);
}
The final UI looks like this:
For the finer details, see the source. CSDN's file upload was acting up, so the code is on GitHub instead.