
In-Depth Analysis of Android Audio: AudioTrack


Both MediaPlayer and AudioTrack can play sound, and both expose Java APIs to application developers, but they differ significantly. The biggest difference is that MediaPlayer can play sound files in many formats, such as MP3, AAC, WAV, OGG and MIDI; MediaPlayer creates the corresponding audio decoder in the framework layer. AudioTrack, by contrast, can only play PCM streams that are already decoded; for files it supports only the WAV format, since WAV files mostly carry raw PCM data. AudioTrack creates no decoder, so it can only play WAV files that need no decoding. The two are still closely related: in the framework layer, MediaPlayer creates an AudioTrack and hands it the decoded PCM stream; the AudioTrack passes the data on to AudioFlinger for mixing, and only then does it reach the hardware for playback. In other words, MediaPlayer wraps an AudioTrack. An example of playing music with AudioTrack:

// compute the minimum buffer size for these parameters (see getMinBufferSize below)
int bufferSize = AudioTrack.getMinBufferSize(32000,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audio = new AudioTrack(
     AudioManager.STREAM_MUSIC, // the stream type
     32000, // sample rate of the audio data: 32 kHz here; 44100 would be 44.1 kHz
     AudioFormat.CHANNEL_OUT_STEREO, // two-channel stereo output; CHANNEL_OUT_MONO would be mono
     AudioFormat.ENCODING_PCM_16BIT, // 8-bit or 16-bit samples; 16-bit here, as virtually all audio is today
     bufferSize, // buffer size in bytes; must be at least getMinBufferSize()
     AudioTrack.MODE_STREAM // the transfer mode: streaming here; the other mode, MODE_STATIC, is described below
     );
audio.play(); // start the audio output; the writes below feed it the actual data
// opening the MP3 file, reading the data, decoding, etc. are omitted ...
byte[] buffer = new byte[4096];
while(true)
{
    // the key step: write the decoded data from the buffer into the AudioTrack object
    audio.write(buffer, 0, 4096);
    if (endOfFile) break; // pseudocode: stop when the decoder reaches end of file
}
// stop playback and release the resources
audio.stop();
audio.release();
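
For reference, here is a minimal, self-contained sketch of the same streaming pattern. It assumes a canonical 44-byte WAV header and hard-codes 44.1 kHz / stereo / 16-bit PCM; the file path and the surrounding class are illustrative only:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import java.io.FileInputStream;
import java.io.IOException;

public class WavStreamer {
    // Streams the PCM payload of a 44.1 kHz, stereo, 16-bit WAV file.
    public static void playWav(String path) throws IOException {
        int minBufSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                minBufSize, AudioTrack.MODE_STREAM);
        FileInputStream in = new FileInputStream(path);
        try {
            in.skip(44);                       // skip the canonical WAV header
            byte[] buffer = new byte[minBufSize];
            track.play();                      // start the output; write() feeds it
            int count;
            while ((count = in.read(buffer)) > 0) {
                track.write(buffer, 0, count); // blocks until the data is queued
            }
        } finally {
            in.close();
            track.stop();
            track.release();
        }
    }
}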


The AudioTrack Construction Process

Each audio stream corresponds to an instance of the AudioTrack class. Every AudioTrack registers with AudioFlinger when it is created; AudioFlinger mixes all AudioTracks together (the Mixer) and sends the result to the AudioHardware for playback. Android currently allows at most 32 simultaneous audio streams, that is, the Mixer processes at most 32 AudioTrack data streams at once.


frameworks\base\media\java\android\media\AudioTrack.java

/**
 * streamType: audio stream type
 * sampleRateInHz: sample rate
 * channelConfig: audio channel configuration
 * audioFormat: audio format
 * bufferSizeInBytes: buffer size in bytes
 * mode: audio data loading mode
 * sessionId: audio session id
 */
public AudioTrack(int streamType, int sampleRateInHz, int channelConfig, int audioFormat,
        int bufferSizeInBytes, int mode, int sessionId)
throws IllegalArgumentException {
    // mState already == STATE_UNINITIALIZED

    // remember which looper is associated with the AudioTrack instantiation
    Looper looper;
    if ((looper = Looper.myLooper()) == null) {
        looper = Looper.getMainLooper();
    }
    mInitializationLooper = looper;
    /**
     * Parameter checks:
     * 1. Check that streamType is one of STREAM_ALARM, STREAM_MUSIC, STREAM_RING, STREAM_SYSTEM,
     *    STREAM_VOICE_CALL, STREAM_NOTIFICATION, STREAM_BLUETOOTH_SCO or STREAM_DTMF, and assign it to mStreamType
     * 2. Check that sampleRateInHz is between 4000 and 48000, and assign it to mSampleRate
     * 3. Set mChannels:
     *      CHANNEL_OUT_DEFAULT, CHANNEL_OUT_MONO, CHANNEL_CONFIGURATION_MONO ---> CHANNEL_OUT_MONO
     *      CHANNEL_OUT_STEREO, CHANNEL_CONFIGURATION_STEREO                  ---> CHANNEL_OUT_STEREO
     * 4. Set mAudioFormat:
     *      ENCODING_PCM_16BIT, ENCODING_DEFAULT ---> ENCODING_PCM_16BIT
     *      ENCODING_PCM_8BIT ---> ENCODING_PCM_8BIT
     * 5. Set mDataLoadMode:
     *      MODE_STREAM
     *      MODE_STATIC
     */
    audioParamCheck(streamType, sampleRateInHz, channelConfig, audioFormat, mode);
    /**
     * Buffer size check: compute the size of one frame in bytes; for ENCODING_PCM_16BIT
     * it is mChannelCount * 2. mNativeBufferSizeInFrames is the frame count.
     */
    audioBuffSizeCheck(bufferSizeInBytes);
    if (sessionId < 0) {
        throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
    }
    //enter the native layer for initialization
    int[] session = new int[1];
    session[0] = sessionId;
    // native initialization
    int initResult = native_setup(new WeakReference<AudioTrack>(this),
            mStreamType, mSampleRate, mChannels, mAudioFormat,
            mNativeBufferSizeInBytes, mDataLoadMode, session);
    if (initResult != SUCCESS) {
        loge("Error code "+initResult+" when initializing AudioTrack.");
        return; // with mState == STATE_UNINITIALIZED
    }
    mSessionId = session[0];
    if (mDataLoadMode == MODE_STATIC) {
        mState = STATE_NO_STATIC_DATA;
    } else {
        mState = STATE_INITIALIZED;
    }
}
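
Note that when native_setup() fails, the constructor above only logs the error and returns with mState still STATE_UNINITIALIZED instead of throwing, so callers should verify the state before use. A minimal defensive sketch (bufferSize is assumed to come from getMinBufferSize()):

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);
if (track.getState() == AudioTrack.STATE_UNINITIALIZED) {
    track.release(); // native setup failed; do not call play() or write()
}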

The constructor variant above takes an audio session ID. Use it when the AudioTrack must be attached to a particular audio session. The primary use of the audio session ID is to associate audio effects with a particular instance of AudioTrack: if an audio session ID is provided when creating an AudioEffect, this effect will be applied only to audio tracks and media players in the same session and not to the output mix. When an AudioTrack is created without specifying a session, it will create its own session, which can be retrieved by calling the getAudioSessionId() method. If a non-zero session ID is provided, this AudioTrack will share effects attached to this session with all other media players or audio tracks in the same session; otherwise a new session will be created for this track.
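
As a short illustration of the session mechanism (using android.media.audiofx.Equalizer as an arbitrary example effect), an effect constructed with a track's session ID applies only to that session:

import android.media.audiofx.Equalizer;

// Bind an Equalizer to this track's session only, not to the global output mix.
int sessionId = track.getAudioSessionId();
Equalizer eq = new Equalizer(0 /* priority */, sessionId);
eq.setEnabled(true);
// ... play audio through `track`, then release the effect:
eq.release();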

streamType

the type of the audio stream. See STREAM_VOICE_CALL, STREAM_SYSTEM, STREAM_RING, STREAM_MUSIC, STREAM_ALARM, and STREAM_NOTIFICATION.

sampleRateInHz

the sample rate expressed in Hertz.

channelConfig

describes the configuration of the audio channels. See CHANNEL_OUT_MONO and CHANNEL_OUT_STEREO.

audioFormat

the format in which the audio data is represented. See ENCODING_PCM_16BIT and ENCODING_PCM_8BIT.

bufferSizeInBytes

the total size (in bytes) of the buffer where audio data is read from for playback. If using the AudioTrack in streaming mode, you can write data into this buffer in smaller chunks than this size. If using the AudioTrack in static mode, this is the maximum size of the sound that will be played for this instance. See getMinBufferSize(int, int, int) to determine the minimum required buffer size for the successful creation of an AudioTrack instance in streaming mode. Using values smaller than getMinBufferSize() will result in an initialization failure.

mode

streaming or static buffer. See MODE_STATIC and MODE_STREAM.

sessionId

ID of the audio session the AudioTrack must be attached to.

AudioTrack has two data loading modes:

    MODE_STREAM

    In this mode, the application continuously writes audio data to the AudioTrack, and each write blocks until the data has been transferred from the Java layer to the native layer and added to the playback queue. This mode suits playing large amounts of audio data, but it introduces some latency.

    MODE_STATIC

    All data is written into the AudioTrack's internal buffer in a single shot before playback begins. This suits audio clips with a small memory footprint and tight latency requirements.
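
    A minimal MODE_STATIC sketch, assuming soundData already holds a short, fully decoded PCM clip (16-bit stereo at 44.1 kHz) and loadClip() is a hypothetical helper:

    byte[] soundData = loadClip(); // hypothetical helper returning decoded PCM bytes
    AudioTrack track = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            44100,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            soundData.length,            // static mode: the buffer holds the entire clip
            AudioTrack.MODE_STATIC);
    track.write(soundData, 0, soundData.length); // one-time upload before playback
    track.play();                                // low-latency start; the data is already resident
    // reloadStaticData() rewinds the clip for replay; call release() when done.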

      frameworks\base\core\jni\android_media_AudioTrack.cpp

      static int android_media_AudioTrack_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,jint streamType, jint sampleRateInHertz, jint javaChannelMask,
              jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession)
      {
          ALOGV("sampleRate=%d, audioFormat(from Java)=%d, channel mask=%x, buffSize=%d",
              sampleRateInHertz, audioFormat, javaChannelMask, buffSizeInBytes);
          int afSampleRate;//sample rate
          int afFrameCount;//frame count
          //query the frame count for this stream type from AudioPolicyService via AudioSystem
          if (AudioSystem::getOutputFrameCount(&afFrameCount, (audio_stream_type_t) streamType) != NO_ERROR) {
              ALOGE("Error creating AudioTrack: Could not get AudioSystem frame count.");
              return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
          }
          //query the sample rate for this stream type from AudioPolicyService via AudioSystem
          if (AudioSystem::getOutputSamplingRate(&afSampleRate, (audio_stream_type_t) streamType) != NO_ERROR) {
              ALOGE("Error creating AudioTrack: Could not get AudioSystem sampling rate.");
              return AUDIOTRACK_ERROR_SETUP_AUDIOSYSTEM;
          }
          // Java channel masks don't map directly to the native definition, but it's a simple shift
          // to skip the two deprecated channel configurations "default" and "mono".
          uint32_t nativeChannelMask = ((uint32_t)javaChannelMask) >> 2;
          //check that this is an output channel mask
          if (!audio_is_output_channel(nativeChannelMask)) {
              ALOGE("Error creating AudioTrack: invalid channel mask.");
              return AUDIOTRACK_ERROR_SETUP_INVALIDCHANNELMASK;
          }
          //get the channel count; popcount() counts how many bits of an integer are 1
          int nbChannels = popcount(nativeChannelMask);
          // check the stream type
          audio_stream_type_t atStreamType;
          switch (streamType) {
          case AUDIO_STREAM_VOICE_CALL:
          case AUDIO_STREAM_SYSTEM:
          case AUDIO_STREAM_RING:
          case AUDIO_STREAM_MUSIC:
          case AUDIO_STREAM_ALARM:
          case AUDIO_STREAM_NOTIFICATION:
          case AUDIO_STREAM_BLUETOOTH_SCO:
          case AUDIO_STREAM_DTMF:
              atStreamType = (audio_stream_type_t) streamType;
              break;
          default:
              ALOGE("Error creating AudioTrack: unknown stream type.");
              return AUDIOTRACK_ERROR_SETUP_INVALIDSTREAMTYPE;
          }
          // This function was called from Java, so we compare the format against the Java constants
          if ((audioFormat != javaAudioTrackFields.PCM16) && (audioFormat != javaAudioTrackFields.PCM8)) {
              ALOGE("Error creating AudioTrack: unsupported audio format.");
              return AUDIOTRACK_ERROR_SETUP_INVALIDFORMAT;
          }
          // for the moment 8bitPCM in MODE_STATIC is not supported natively in the AudioTrack C++ class so we declare everything as 16bitPCM, the 8->16bit conversion for MODE_STATIC will be handled in android_media_AudioTrack_native_write_byte()
          if ((audioFormat == javaAudioTrackFields.PCM8)
              && (memoryMode == javaAudioTrackFields.MODE_STATIC)) {
              ALOGV("android_media_AudioTrack_native_setup(): requesting MODE_STATIC for 8bit \
                  buff size of %dbytes, switching to 16bit, buff size of %dbytes",
                  buffSizeInBytes, 2*buffSizeInBytes);
              audioFormat = javaAudioTrackFields.PCM16;
              // we will need twice the memory to store the data
              buffSizeInBytes *= 2;
          }
          //bytes per sample point for the given sample format
          int bytesPerSample = audioFormat == javaAudioTrackFields.PCM16 ? 2 : 1;
          audio_format_t format = audioFormat == javaAudioTrackFields.PCM16 ?
                  AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;
          //derive the frame count from the buffer size: one frame = bytes per sample * channel count
          int frameCount = buffSizeInBytes / (nbChannels * bytesPerSample);
          //check the validity of the parameters
          jclass clazz = env->GetObjectClass(thiz);
          if (clazz == NULL) {
              ALOGE("Can't find %s when setting up callback.", kClassPathName);
              return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
          }
          if (jSession == NULL) {
              ALOGE("Error creating AudioTrack: invalid session ID pointer");
              return AUDIOTRACK_ERROR;
          }
          jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
          if (nSession == NULL) {
              ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
              return AUDIOTRACK_ERROR;
          }
          int sessionId = nSession[0];
          env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
          nSession = NULL;
          // create the native AudioTrack object
          sp<AudioTrack> lpTrack = new AudioTrack();
          if (lpTrack == NULL) {
              ALOGE("Error creating uninitialized AudioTrack");
              return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
          }
          // create the container that stores the audio data
          AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
          lpJniStorage->mStreamType = atStreamType;
          //save a reference to the Java-layer AudioTrack in AudioTrackJniStorage
          lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
          // we use a weak reference so the AudioTrack object can be garbage collected.
          lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
          lpJniStorage->mCallbackData.busy = false;
          //initialize the native AudioTrack object for the chosen memory mode
          if (memoryMode == javaAudioTrackFields.MODE_STREAM) { //stream mode
              lpTrack->set( 
                  atStreamType,// stream type
                  sampleRateInHertz,
                  format,// word length, PCM
                  nativeChannelMask,
                  frameCount,
                  AUDIO_OUTPUT_FLAG_NONE,
                  audioCallback, 
                  &(lpJniStorage->mCallbackData),//callback, callback data (user)
                  0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                  0,//in stream mode the shared memory is created inside AudioFlinger
                  true,// thread can call Java
                  sessionId);// audio session ID
          } else if (memoryMode == javaAudioTrackFields.MODE_STATIC) {//static mode
              // allocate the shared memory region for the AudioTrack
              if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
                  ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
                  goto native_init_failure;
              }
              lpTrack->set(
                  atStreamType,// stream type
                  sampleRateInHertz,
                  format,// word length, PCM
                  nativeChannelMask,
                  frameCount,
                  AUDIO_OUTPUT_FLAG_NONE,
                  audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user));
                  0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
                  lpJniStorage->mMemBase,// shared mem
                  true,// thread can call Java
                  sessionId);// audio session ID
          }
          if (lpTrack->initCheck() != NO_ERROR) {
              ALOGE("Error initializing AudioTrack");
              goto native_init_failure;
          }
          nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
          if (nSession == NULL) {
              ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
              goto native_init_failure;
          }
          // read the audio session ID back from AudioTrack in case we create a new session
          nSession[0] = lpTrack->getSessionId();
          env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
          nSession = NULL;
          {   // scope for the lock
              Mutex::Autolock l(sLock);
              sAudioTrackCallBackCookies.add(&lpJniStorage->mCallbackData);
          }
          // save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
          // of the Java object (in mNativeTrackInJavaObj)
          setAudioTrack(env, thiz, lpTrack);
          // save the JNI resources so we can free them later
          //ALOGV("storing lpJniStorage: %x\n", (int)lpJniStorage);
          env->SetIntField(thiz, javaAudioTrackFields.jniData, (int)lpJniStorage);
          return AUDIOTRACK_SUCCESS;
          // failures:
      native_init_failure:
          if (nSession != NULL) {
              env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
          }
          env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_class);
          env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_ref);
          delete lpJniStorage;
          env->SetIntField(thiz, javaAudioTrackFields.jniData, 0);
          return AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
      }
      

      native_setup() does four things:

      1. check the audio parameters;

      2. create a native AudioTrack object;

      3. create an AudioTrackJniStorage object;

      4. call set() to initialize the AudioTrack.

      bufferSize = frameCount * bytes per frame = frameCount * (channel count * bytes per channel sample)
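
      For instance, with the stereo 16-bit stream from the opening example, one frame is 2 channels * 2 bytes = 4 bytes, so a 4096-byte buffer holds 1024 frames. As a quick sanity check:

      int channelCount = 2;                  // CHANNEL_OUT_STEREO
      int bytesPerSample = 2;                // ENCODING_PCM_16BIT
      int bufferSizeInBytes = 4096;
      int frameCount = bufferSizeInBytes / (channelCount * bytesPerSample); // 1024 frames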

      Constructing the Native AudioTrack

      frameworks\av\media\libmedia\AudioTrack.cpp

      AudioTrack::AudioTrack(): mStatus(NO_INIT),
        mIsTimed(false),
        mPreviousPriority(ANDROID_PRIORITY_NORMAL),
        mPreviousSchedulingGroup(SP_DEFAULT),
        mCblk(NULL)
      {
      }
      

      Constructing AudioTrackJniStorage

      AudioTrackJniStorage is the container in which the audio data is stored; it is a wrapper around anonymous shared memory.

      struct audiotrack_callback_cookie {
          jclass      audioTrack_class;
          jobject     audioTrack_ref;//reference to the Java-layer AudioTrack object
          bool        busy;//busy flag
          Condition   cond;//condition variable
      };
      
      class AudioTrackJniStorage {
          public:
              sp<MemoryHeapBase>         mMemHeap;
              sp<MemoryBase>             mMemBase;
              audiotrack_callback_cookie mCallbackData;
              audio_stream_type_t        mStreamType;
      
          AudioTrackJniStorage() {
              mCallbackData.audioTrack_class = 0;
              mCallbackData.audioTrack_ref = 0;
              mStreamType = AUDIO_STREAM_DEFAULT;
          }
      
          ~AudioTrackJniStorage() {
              mMemBase.clear();
              mMemHeap.clear();
          }
          /**
           * Allocate an anonymous shared memory region of the given size
           * @param sizeInBytes: size of the anonymous shared memory
           * @return
           */
          bool allocSharedMem(int sizeInBytes) {
              //create an anonymous shared memory region
              mMemHeap = new MemoryHeapBase(sizeInBytes, 0, "AudioTrack Heap Base");
              if (mMemHeap->getHeapID() < 0) {
                  return false;
              }
              mMemBase = new MemoryBase(mMemHeap, 0, sizeInBytes);
              return true;
          }
      };
      
      /**
       * Create an anonymous shared memory region
       * @param size: size of the anonymous shared memory
       * @param flags: creation flags
       * @param name: name of the anonymous shared memory
       */
      MemoryHeapBase::MemoryHeapBase(size_t size, uint32_t flags, char const * name)
      : mFD(-1), mSize(0), mBase(MAP_FAILED), mFlags(flags),
        mDevice(0), mNeedUnmap(false), mOffset(0)
      {
      	//get the memory page size
      	const size_t pagesize = getpagesize();
      	//round the size up to a page boundary
      	size = ((size + pagesize-1) & ~(pagesize-1));
      	/* create the shared memory: open the /dev/ashmem device and get back a file descriptor */
      	int fd = ashmem_create_region(name == NULL ? "MemoryHeapBase" : name, size);
      	ALOGE_IF(fd<0, "error creating ashmem region: %s", strerror(errno));
      	if (fd >= 0) {
      		//map the anonymous shared memory into this process's address space with mmap
      	    if (mapfd(fd, size) == NO_ERROR) {
      	        if (flags & READ_ONLY) {
      	            ashmem_set_prot_region(fd, PROT_READ);
      	        }
      	    }
      	}
      }
      

      Initializing the AudioTrack

      set() configures the AudioTrack's audio parameters. Android 4.4 added a transfer_type parameter that specifies how the audio data is transferred, with the transfer modes defined below. Since the JNI layer above always passes threadCanCallJava = true, the TRANSFER_DEFAULT case in set() resolves a Java-created MODE_STREAM track to TRANSFER_SYNC, and a MODE_STATIC track (non-null sharedBuffer) to TRANSFER_SHARED:

      enum transfer_type {
          TRANSFER_DEFAULT,   // not specified explicitly; determine from the other parameters
          TRANSFER_CALLBACK,  // callback EVENT_MORE_DATA
          TRANSFER_OBTAIN,    // FIXME deprecated: call obtainBuffer() and releaseBuffer()
          TRANSFER_SYNC,      // synchronous write()
          TRANSFER_SHARED,    // shared memory
      };

      /**
       * Initialize the AudioTrack
       * @param streamType  audio stream type
       * @param sampleRate  sample rate
       * @param format      audio format
       * @param channelMask output channel mask
       * @param frameCount  frame count
       * @param flags       output flags
       * @param cbf   Callback function. If not null, this function is called periodically
       *   to provide new data and inform of marker, position updates, etc.
       * @param user  Context for use by the callback receiver.
       * @param notificationFrames  The callback function is called each time notificationFrames
       *   PCM frames have been consumed from the track input buffer.
       * @param sharedBuffer shared memory
       * @param threadCanCallJava
       * @param sessionId
       * @return
       */
      status_t AudioTrack::set(
              audio_stream_type_t streamType,
              uint32_t sampleRate,
              audio_format_t format,
              audio_channel_mask_t channelMask,
              int frameCountInt,
              audio_output_flags_t flags,
              callback_t cbf,
              void* user,
              int notificationFrames,
              const sp<IMemory>& sharedBuffer,
              bool threadCanCallJava,
              int sessionId,
              transfer_type transferType,
              const audio_offload_info_t *offloadInfo,
              int uid)
      {
      	//determine the audio data transfer type
          switch (transferType) {
          case TRANSFER_DEFAULT:
              if (sharedBuffer != 0) {
                  transferType = TRANSFER_SHARED;
              } else if (cbf == NULL || threadCanCallJava) {
                  transferType = TRANSFER_SYNC;
              } else {
                  transferType = TRANSFER_CALLBACK;
              }
              break;
          case TRANSFER_CALLBACK:
              if (cbf == NULL || sharedBuffer != 0) {
                  ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");
                  return BAD_VALUE;
              }
              break;
          case TRANSFER_OBTAIN:
          case TRANSFER_SYNC:
              if (sharedBuffer != 0) {
                  ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");
                  return BAD_VALUE;
              }
              break;
          case TRANSFER_SHARED:
              if (sharedBuffer == 0) {
                  ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");
                  return BAD_VALUE;
              }
              break;
          default:
              ALOGE("Invalid transfer type %d", transferType);
              return BAD_VALUE;
          }
          mTransfer = transferType;
          // FIXME "int" here is legacy and will be replaced by size_t later
          if (frameCountInt < 0) {
              ALOGE("Invalid frame count %d", frameCountInt);
              return BAD_VALUE;
          }
          size_t frameCount = frameCountInt;
          ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
                  sharedBuffer->size());
          ALOGV("set() streamType %d frameCount %u flags %04x", streamType, frameCount, flags);
          AutoMutex lock(mLock);
          // invariant that mAudioTrack != 0 is true only after set() returns successfully
          if (mAudioTrack != 0) {
              ALOGE("Track already in use");
              return INVALID_OPERATION;
          }
          mOutput = 0;
          // audio stream type setup
          if (streamType == AUDIO_STREAM_DEFAULT) {
              streamType = AUDIO_STREAM_MUSIC;
          }
          //if no sample rate was given, query AudioPolicyService for this stream type's sample rate
          if (sampleRate == 0) {
              uint32_t afSampleRate;
              if (AudioSystem::getOutputSamplingRate(&afSampleRate, streamType) != NO_ERROR) {
                  return NO_INIT;
              }
              sampleRate = afSampleRate;
          }
          mSampleRate = sampleRate;
          //audio format setup
          if (format == AUDIO_FORMAT_DEFAULT) {
              format = AUDIO_FORMAT_PCM_16_BIT;
          }
          //if no channel mask was set, default to stereo output
          if (channelMask == 0) {
              channelMask = AUDIO_CHANNEL_OUT_STEREO;
          }
          // validate parameters
          if (!audio_is_valid_format(format)) {
              ALOGE("Invalid format %d", format);
              return BAD_VALUE;
          }
          // AudioFlinger does not currently support 8-bit data in shared memory
          if (format == AUDIO_FORMAT_PCM_8_BIT && sharedBuffer != 0) {
              ALOGE("8-bit data in shared memory is not supported");
              return BAD_VALUE;
          }
          // force direct flag if format is not linear PCM
          // or offload was requested
          if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
                  || !audio_is_linear_pcm(format)) {
              ALOGV( (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
                          ? "Offload request, forcing to Direct Output"
                          : "Not linear PCM, forcing to Direct Output");
              flags = (audio_output_flags_t)
                      // FIXME why can't we allow direct AND fast?
                      ((flags | AUDIO_OUTPUT_FLAG_DIRECT) & ~AUDIO_OUTPUT_FLAG_FAST);
          }
          // only allow deep buffering for music stream type
          if (streamType != AUDIO_STREAM_MUSIC) {
              flags = (audio_output_flags_t)(flags &~AUDIO_OUTPUT_FLAG_DEEP_BUFFER);
          }
          //validate the output channel mask
          if (!audio_is_output_channel(channelMask)) {
              ALOGE("Invalid channel mask %#x", channelMask);
              return BAD_VALUE;
          }
          mChannelMask = channelMask;
          //compute the channel count
          uint32_t channelCount = popcount(channelMask);
          mChannelCount = channelCount;
          if (audio_is_linear_pcm(format)) {
              mFrameSize = channelCount * audio_bytes_per_sample(format);
              mFrameSizeAF = channelCount * sizeof(int16_t);
          } else {
              mFrameSize = sizeof(uint8_t);
              mFrameSizeAF = sizeof(uint8_t);
          }
          /**
           * audio_io_handle_t is an integer value identifying an audio playback thread. Here the
           * audio parameters are used to look up, in AudioFlinger, the playback thread that will
           * play this audio, and that thread's ID value is returned.
           */
          audio_io_handle_t output = AudioSystem::getOutput(
                                          streamType,
                                          sampleRate, format, channelMask,
                                          flags,
                                          offloadInfo);
          if (output == 0) {
              ALOGE("Could not get audio output for stream type %d", streamType);
              return BAD_VALUE;
          }
          //AudioTrack member initialization
          mVolume[LEFT] = 1.0f;
          mVolume[RIGHT] = 1.0f;
          mSendLevel = 0.0f;
          mFrameCount = frameCount;
          mReqFrameCount = frameCount;
          mNotificationFramesReq = notificationFrames;
          mNotificationFramesAct = 0;
          mSessionId = sessionId;
          if (uid == -1 || (IPCThreadState::self()->getCallingPid() != getpid())) {
              mClientUid = IPCThreadState::self()->getCallingUid();
          } else {
              mClientUid = uid;
          }
          mAuxEffectId = 0;
          mFlags = flags;
          mCbf = cbf;
          //if a callback that supplies audio data was set, start an AudioTrackThread to provide the data
          if (cbf != NULL) {
              mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
              mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
          }
          // create the IAudioTrack
          status_t status = createTrack_l(streamType,
                                        sampleRate,
                                        format,
                                        frameCount,
                                        flags,
                                        sharedBuffer,
                                        output,
                                        0 /*epoch*/);
          if (status != NO_ERROR) {
              if (mAudioTrackThread != 0) {
                  mAudioTrackThread->requestExit();   // see comment in AudioTrack.h
                  mAudioTrackThread->requestExitAndWait();
                  mAudioTrackThread.clear();
              }
              //Use of direct and offloaded output streams is ref counted by audio policy manager.
              // As getOutput was called above and resulted in an output stream to be opened,
              // we need to release it.
              AudioSystem::releaseOutput(output);
              return status;
          }
          mStatus = NO_ERROR;
          mStreamType = streamType;
          mFormat = format;
          mSharedBuffer = sharedBuffer;
          mState = STATE_STOPPED;
          mUserData = user;
          mLoopPeriod = 0;
          mMarkerPosition = 0;
          mMarkerReached = false;
          mNewPosition = 0;
          mUpdatePeriod = 0;
          AudioSystem::acquireAudioSessionId(mSessionId);
          mSequence = 1;
          mObservedSequence = mSequence;
          mInUnderrun = false;
          mOutput = output;
          return NO_ERROR;
      }
      

      As we know, when AudioPolicyService starts it loads all the audio interfaces the system supports and opens the default audio output. When an audio output is opened, AudioFlinger::openOutput() creates a PlaybackThread for that output interface, assigns the thread a globally unique audio_io_handle_t value, and stores the pair as a key/value entry in AudioFlinger's member variable mPlaybackThreads. Here, AudioSystem::getOutput() is first called with the audio parameters to obtain the id of the PlaybackThread for the current audio output interface, and that id is then passed to createTrack for creating the Track. Inside AudioFlinger, an AudioTrack is managed as a Track. Because the two live in different processes, a "bridge" is needed to connect them, and that communication channel is IAudioTrack. Besides requesting a Track for the AudioTrack inside AudioFlinger, createTrack_l also establishes this IAudioTrack bridge between the two.

      Obtaining the Audio Output

      Obtaining the audio output means using the audio parameters (sample rate, channels, format, and so on) to find a suitable AudioOutputDescriptor in the list of already opened audio outputs, and returning the id of the playback thread that AudioFlinger created for that output. If no opened output matches the current parameters, AudioFlinger is asked to open a new audio output channel and create a playback thread for it, and that thread's id is returned. For details, see the "opening the output" section of the article on the AudioPolicyService startup process.

      Creating the AudioTrackThread

      When the AudioTrack is initialized, an AudioTrackThread is created if audioCallback is not null (see the cbf check in set() above).

      AudioTrack supports two ways of feeding in data:

      1) Push: the user actively calls write(); MediaPlayerService usually works this way;

      2) Pull: the AudioTrackThread actively fetches data from the user through the audioCallback callback; ToneGenerator uses this approach (a Java-level approximation is sketched after the threadLoop() listing below).

      bool AudioTrack::AudioTrackThread::threadLoop()
      {
          {
              AutoMutex _l(mMyLock);
              if (mPaused) {
                  mMyCond.wait(mMyLock);
                  // caller will check for exitPending()
                  return true;
              }
          }
          //call processAudioBuffer() on the AudioTrack that created this AudioTrackThread
          if (!mReceiver.processAudioBuffer(this)) {
              pause();
          }
          return true;
      }
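
      The Java-level API only exposes the push model directly, but a pull-like pattern can be approximated with periodic position notifications. A hedged sketch (nextPcmChunk() is a hypothetical data source):

      track.setPositionNotificationPeriod(1024); // notify after every 1024 frames consumed
      track.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
          @Override
          public void onPeriodicNotification(AudioTrack t) {
              byte[] chunk = nextPcmChunk();     // hypothetical data source
              t.write(chunk, 0, chunk.length);   // refill the track from the callback
          }
          @Override
          public void onMarkerReached(AudioTrack t) { /* not used here */ }
      });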
      

      Requesting a Track

      Audio playback requires the AudioTrack to write audio data and AudioFlinger to perform the mixing, so a data channel must be established between them. Since AudioTrack and AudioFlinger live in different process spaces, Android uses Binder IPC to bridge the two.

      status_t AudioTrack::createTrack_l(
              audio_stream_type_t streamType,
              uint32_t sampleRate,
              audio_format_t format,
              size_t frameCount,
              audio_output_flags_t flags,
              const sp<IMemory>& sharedBuffer,
              audio_io_handle_t output,
              size_t epoch)
      {
          status_t status;
          //get the proxy object for AudioFlinger
          const sp<IAudioFlinger>& audioFlinger = AudioSystem::get_audio_flinger();
          if (audioFlinger == 0) {
              ALOGE("Could not get audioflinger");
              return NO_INIT;
          }
          //query the output latency
          uint32_t afLatency;
          status = AudioSystem::getLatency(output, streamType, &afLatency);
          if (status != NO_ERROR) {
              ALOGE("getLatency(%d) failed status %d", output, status);
              return NO_INIT;
          }
          //query the audio frame count
          size_t afFrameCount;
          status = AudioSystem::getFrameCount(output, streamType, &afFrameCount);
          if (status != NO_ERROR) {
              ALOGE("getFrameCount(output=%d, streamType=%d) status %d", output, streamType, status);
              return NO_INIT;
          }
          //query the sample rate
          uint32_t afSampleRate;
          status = AudioSystem::getSamplingRate(output, streamType, &afSampleRate);
          if (status != NO_ERROR) {
              ALOGE("getSamplingRate(output=%d, streamType=%d) status %d", output, streamType, status);
              return NO_INIT;
          }
          // Client decides whether the track is TIMED (see below), but can only express a preference
          // for FAST.  Server will perform additional tests.
          if ((flags & AUDIO_OUTPUT_FLAG_FAST) && !(
                  // either of these use cases:
                  // use case 1: shared buffer
                  (sharedBuffer != 0) ||
                  // use case 2: callback handler
                  (mCbf != NULL))) {
              ALOGW("AUDIO_OUTPUT_FLAG_FAST denied by client");
              // once denied, do not request again if IAudioTrack is re-created
              flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_FAST);
              mFlags = flags;
          }
          ALOGV("createTrack_l() output %d afLatency %d", output, afLatency);
          // The client's AudioTrack buffer is divided into n parts for purpose of wakeup by server, where
          //  n = 1   fast track; nBuffering is ignored
          //  n = 2   normal track, no sample rate conversion
          //  n = 3   normal track, with sample rate conversion
          //          (pessimistic; some non-1:1 conversion ratios don't actually need triple-buffering)
          //  n > 3   very high latency or very small notification interval; nBuffering is ignored
          const uint32_t nBuffering = (sampleRate == afSampleRate) ? 2 : 3;
          mNotificationFramesAct = mNotificationFramesReq;
          if (!audio_is_linear_pcm(format)) {
              if (sharedBuffer != 0) {//static mode
                  // Same comment as below about ignoring frameCount parameter for set()
                  frameCount = sharedBuffer->size();
              } else if (frameCount == 0) {
                  frameCount = afFrameCount;
              }
              if (mNotificationFramesAct != frameCount) {
                  mNotificationFramesAct = frameCount;
              }
          } else if (sharedBuffer != 0) {// static mode
              // Ensure that buffer alignment matches channel count
              // 8-bit data in shared memory is not currently supported by AudioFlinger
              size_t alignment = /* format == AUDIO_FORMAT_PCM_8_BIT ? 1 : */ 2;
              if (mChannelCount > 1) {
                  // More than 2 channels does not require stronger alignment than stereo
                  alignment <<= 1;
              }
              if (((size_t)sharedBuffer->pointer() & (alignment - 1)) != 0) {
                  ALOGE("Invalid buffer alignment: address %p, channel count %u",
                          sharedBuffer->pointer(), mChannelCount);
                  return BAD_VALUE;
              }
              // When initializing a shared buffer AudioTrack via constructors,
              // there's no frameCount parameter.
              // But when initializing a shared buffer AudioTrack via set(),
              // there _is_ a frameCount parameter.  We silently ignore it.
              frameCount = sharedBuffer->size()/mChannelCount/sizeof(int16_t);
          } else if (!(flags & AUDIO_OUTPUT_FLAG_FAST)) {
              // FIXME move these calculations and associated checks to server
              // Ensure that buffer depth covers at least audio hardware latency
              uint32_t minBufCount = afLatency / ((1000 * afFrameCount)/afSampleRate);
              ALOGV("afFrameCount=%d, minBufCount=%d, afSampleRate=%u, afLatency=%d",
                      afFrameCount, minBufCount, afSampleRate, afLatency);
              if (minBufCount <= nBuffering) {
                  minBufCount = nBuffering;
              }
              size_t minFrameCount = (afFrameCount*sampleRate*minBufCount)/afSampleRate;
              ALOGV("minFrameCount: %u, afFrameCount=%d, minBufCount=%d, sampleRate=%u, afSampleRate=%u"", afLatency=%d",minFrameCount, afFrameCount, minBufCount, sampleRate, afSampleRate, afLatency);
              if (frameCount == 0) {
                  frameCount = minFrameCount;
              } else if (frameCount < minFrameCount) {
                  // not ALOGW because it happens all the time when playing key clicks over A2DP
                  ALOGV("Minimum buffer size corrected from %d to %d",
                           frameCount, minFrameCount);
                  frameCount = minFrameCount;
              }
              // Make sure that application is notified with sufficient margin before underrun
              if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount/nBuffering) {
                  mNotificationFramesAct = frameCount/nBuffering;
              }
          } else {
              // For fast tracks, the frame count calculations and checks are done by server
          }
          IAudioFlinger::track_flags_t trackFlags = IAudioFlinger::TRACK_DEFAULT;
          if (mIsTimed) {
              trackFlags |= IAudioFlinger::TRACK_TIMED;
          }
          pid_t tid = -1;
          if (flags & AUDIO_OUTPUT_FLAG_FAST) {
              trackFlags |= IAudioFlinger::TRACK_FAST;
              if (mAudioTrackThread != 0) {
                  tid = mAudioTrackThread->getTid();
              }
          }
          if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
              trackFlags |= IAudioFlinger::TRACK_OFFLOAD;
          }
          //send the createTrack request to AudioFlinger; in stream mode sharedBuffer is null, and
          //output is the id of the playback thread inside AudioFlinger
          sp<IAudioTrack> track = audioFlinger->createTrack(streamType,
                                                            sampleRate,
                                                            // AudioFlinger only sees 16-bit PCM
                                                            format == AUDIO_FORMAT_PCM_8_BIT ?
                                                                    AUDIO_FORMAT_PCM_16_BIT : format,
                                                            mChannelMask,
                                                            frameCount,
                                                            &trackFlags,
                                                            sharedBuffer,
                                                            output,
                                                            tid,
                                                            &mSessionId,
                                                            mName,
                                                            mClientUid,
                                                            &status);
          if (track == 0) {
              ALOGE("AudioFlinger could not create track, status: %d", status);
              return status;
          }
          //when AudioFlinger creates the Track object it allocates a block of shared memory;
          //here we obtain the proxy object (BpMemory) for that shared memory
          sp<IMemory> iMem = track->getCblk();
          if (iMem == 0) {
              ALOGE("Could not get control block");
              return NO_INIT;
          }
          // invariant that mAudioTrack != 0 is true only after set() returns successfully
          if (mAudioTrack != 0) {
              mAudioTrack->asBinder()->unlinkToDeath(mDeathNotifier, this);
              mDeathNotifier.clear();
          }
          //save the created Track proxy object and the anonymous shared memory proxy object
          //in the AudioTrack's member variables
          mAudioTrack = track;
          mCblkMemory = iMem;
          //save the base address of the anonymous shared memory; an audio_track_cblk_t object
          //sits at the head of this shared memory
          audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMem->pointer());
          mCblk = cblk;
          size_t temp = cblk->frameCount_;
          if (temp < frameCount || (frameCount == 0 && temp == 0)) {
              // In current design, AudioTrack client checks and ensures frame count validity before
              // passing it to AudioFlinger so AudioFlinger should not return a different value except
              // for fast track as it uses a special method of assigning frame count.
              ALOGW("Requested frameCount %u but received frameCount %u", frameCount, temp);
          }
          frameCount = temp;
          mAwaitBoost = false;
          if (flags & AUDIO_OUTPUT_FLAG_FAST) {
              if (trackFlags & IAudioFlinger::TRACK_FAST) {
                  ALOGV("AUDIO_OUTPUT_FLAG_FAST successful; frameCount %u", frameCount);
                  mAwaitBoost = true;
                  if (sharedBuffer == 0) {
                      // double-buffering is not required for fast tracks, due to tighter scheduling
                      if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount) {
                          mNotificationFramesAct = frameCount;
                      }
                  }
              } else {
                  ALOGV("AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount %u", frameCount);
                  // once denied, do not request again if IAudioTrack is re-created
                  flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_FAST);
                  mFlags = flags;
                  if (sharedBuffer == 0) {//stream mode
                      if (mNotificationFramesAct == 0 || mNotificationFramesAct > frameCount/nBuffering) {
                          mNotificationFramesAct = frameCount/nBuffering;
                      }
                  }
              }
          }
          if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
              if (trackFlags & IAudioFlinger::TRACK_OFFLOAD) {
                  ALOGV("AUDIO_OUTPUT_FLAG_OFFLOAD successful");
              } else {
                  ALOGW("AUDIO_OUTPUT_FLAG_OFFLOAD denied by server");
                  flags = (audio_output_flags_t) (flags & ~AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD);
                  mFlags = flags;
                  return NO_INIT;
              }
          }
          mRefreshRemaining = true;
          // Starting address of buffers in shared memory.  If there is a shared buffer, buffers
          // is the value of pointer() for the shared buffer, otherwise buffers points
          // immediately after the control block.  This address is for the mapping within client
          // address space.  AudioFlinger::TrackBase::mBuffer is for the server address space.
          void* buffers;
          if (sharedBuffer == 0) {//stream mode
              buffers = (char*)cblk + sizeof(audio_track_cblk_t);
          } else {
              buffers = sharedBuffer->pointer();
          }
          mAudioTrack->attachAuxEffect(mAuxEffectId);
          // FIXME don't believe this lie
          mLatency = afLatency + (1000*frameCount) / sampleRate;
          mFrameCount = frameCount;
          // If IAudioTrack is re-created, don't let the requested frameCount
          // decrease.  This can confuse clients that cache frameCount().
          if (frameCount > mReqFrameCount) {
              mReqFrameCount = frameCount;
          }
          // update proxy
          if (sharedBuffer == 0) {
              mStaticProxy.clear();
              mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
          } else {
              mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
              mProxy = mStaticProxy;
          }
          mProxy->setVolumeLR((uint32_t(uint16_t(mVolume[RIGHT] * 0x1000)) << 16) |
                  uint16_t(mVolume[LEFT] * 0x1000));
          mProxy->setSendLevel(mSendLevel);
          mProxy->setSampleRate(mSampleRate);
          mProxy->setEpoch(epoch);
          mProxy->setMinimum(mNotificationFramesAct);
          mDeathNotifier = new DeathNotifier(this);
          mAudioTrack->asBinder()->linkToDeath(mDeathNotifier, this);
          return NO_ERROR;
      }
      

      IAudioTrack establishes the link between AudioTrack and AudioFlinger. In static mode, the anonymous shared memory that stores the audio data is created on the AudioTrack side, while in stream mode it is created on the AudioFlinger side. The shared memory differs between the two modes: in stream mode, an audio_track_cblk_t object is placed at the head of the anonymous shared memory to coordinate the pace of the producer (AudioTrack) and the consumer (AudioFlinger). createTrack, then, creates a Track object inside AudioFlinger.
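
      As a conceptual analogy only (the real audio_track_cblk_t lives in shared memory and uses atomics and futexes), the control block behaves like a ring-buffer header whose indices coordinate the two sides:

      // Conceptual sketch: illustrates the producer/consumer roles of the control block.
      class ControlBlockSketch {
          final int frameCount;  // ring buffer capacity, in frames
          long writeIndex;       // frames written by the producer (AudioTrack)
          long readIndex;        // frames consumed by the consumer (AudioFlinger mixer)

          ControlBlockSketch(int frameCount) { this.frameCount = frameCount; }

          int framesAvailableToWrite() { return frameCount - (int) (writeIndex - readIndex); }
          int framesAvailableToRead()  { return (int) (writeIndex - readIndex); }
      }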

      frameworks\av\services\audioflinger\AudioFlinger.cpp

      sp<IAudioTrack> AudioFlinger::createTrack(
              audio_stream_type_t streamType,
              uint32_t sampleRate,
              audio_format_t format,
              audio_channel_mask_t channelMask,
              size_t frameCount,
              IAudioFlinger::track_flags_t *flags,
              const sp<IMemory>& sharedBuffer,
              audio_io_handle_t output,
              pid_t tid,
              int *sessionId,
              String8& name,
              int clientUid,
              status_t *status)
      {
          sp<PlaybackThread::Track> track;
          sp<TrackHandle> trackHandle;
          sp<Client> client;
          status_t lStatus;
          int lSessionId;
          // client AudioTrack::set already implements AUDIO_STREAM_DEFAULT => AUDIO_STREAM_MUSIC,
          // but if someone uses binder directly they could bypass that and cause us to crash
          if (uint32_t(streamType) >= AUDIO_STREAM_CNT) {
              ALOGE("createTrack() invalid stream type %d", streamType);
              lStatus = BAD_VALUE;
              goto Exit;
          }
          // client is responsible for conversion of 8-bit PCM to 16-bit PCM,
          // and we don't yet support 8.24 or 32-bit PCM
          if (audio_is_linear_pcm(format) && format != AUDIO_FORMAT_PCM_16_BIT) {
              ALOGE("createTrack() invalid format %d", format);
              lStatus = BAD_VALUE;
              goto Exit;
          }
          {
              Mutex::Autolock _l(mLock);
              //look up the PlaybackThread for this playback thread id; at openOutput time the playback threads were saved as key/value pairs in AudioFlinger's mPlaybackThreads
              PlaybackThread *thread = checkPlaybackThread_l(output);
              PlaybackThread *effectThread = NULL;
              if (thread == NULL) {
                  ALOGE("no playback thread found for output handle %d", output);
                  lStatus = BAD_VALUE;
                  goto Exit;
              }
              pid_t pid = IPCThreadState::self()->getCallingPid();
              //check whether a Client object already exists for the calling process's pid; if not, create one
              client = registerPid_l(pid);
              ALOGV("createTrack() sessionId: %d", (sessionId == NULL) ? -2 : *sessionId);
              if (sessionId != NULL && *sessionId != AUDIO_SESSION_OUTPUT_MIX) {
                  // check if an effect chain with the same session ID is present on another
                  // output thread and move it here.
                  //iterate over all playback threads except the current output's; if a Track in
                  //another thread has the same sessionId, take that thread as this Track's effectThread.
                  for (size_t i = 0; i < mPlaybackThreads.size(); i++) {
                      sp<PlaybackThread> t = mPlaybackThreads.valueAt(i);
                      if (mPlaybackThreads.keyAt(i) != output) {
                          uint32_t sessions = t->hasAudioSession(*sessionId);
                          if (sessions & PlaybackThread::EFFECT_SESSION) {
                              effectThread = t.get();
                              break;
                          }
                      }
                  }
                  lSessionId = *sessionId;
              } else {
                  // if no audio session id is provided, create one here
                  lSessionId = nextUniqueId();
                  if (sessionId != NULL) {
                      *sessionId = lSessionId;
                  }
              }
              ALOGV("createTrack() lSessionId: %d", lSessionId);
              //create the Track inside the PlaybackThread we found
              track = thread->createTrack_l(client, streamType, sampleRate, format,
                      channelMask, frameCount, sharedBuffer, lSessionId, flags, tid, clientUid, &lStatus);
              // move effect chain to this output thread if an effect on same session was waiting
              // for a track to be created
              if (lStatus == NO_ERROR && effectThread != NULL) {
                  Mutex::Autolock _dl(thread->mLock);
                  Mutex::Autolock _sl(effectThread->mLock);
                  moveEffectChain_l(lSessionId, effectThread, thread, true);
              }
              // Look for sync events awaiting for a session to be used.
              for (int i = 0; i < (int)mPendingSyncEvents.size(); i++) {
                  if (mPendingSyncEvents[i]->triggerSession() == lSessionId) {
                      if (thread->isValidSyncEvent(mPendingSyncEvents[i])) {
                          if (lStatus == NO_ERROR) {
                              (void) track->setSyncEvent(mPendingSyncEvents[i]);
                          } else {
                              mPendingSyncEvents[i]->cancel();
                          }
                          mPendingSyncEvents.removeAt(i);
                          i--;
                      }
                  }
              }
          }
          //the Track was created successfully; now a TrackHandle proxy object must be created for it
          if (lStatus == NO_ERROR) {
              // s for server's pid, n for normal mixer name, f for fast index
              name = String8::format("s:%d;n:%d;f:%d", getpid_cached, track->name() - AudioMixer::TRACK0,track->fastIndex());
              trackHandle = new TrackHandle(track);
          } else {
              // remove local strong reference to Client before deleting the Track so that the Client destructor is called by the TrackBase destructor with mLock held
              client.clear();
              track.clear();
          }
      Exit:
          if (status != NULL) {
              *status = lStatus;
          }
          /**
           * Return the IAudioTrack proxy object to the client process so that it can access the
           * newly created Track across processes, via:
           * BpAudioTrack --> BnAudioTrack --> TrackHandle --> Track
           */
          return trackHandle;
      }
      

      This function first creates, one per application process, a Client object that talks directly to that client process. It then finds the corresponding PlaybackThread from the playback thread id and hands the Track creation over to it. Once the PlaybackThread has created the Track, a TrackHandle object still has to be created to act as the Track's communication proxy, since the Track itself has no IPC capability.


      Constructing the Client Object

      A Client object is created for the client requesting audio playback, keyed by its process pid.

      sp<AudioFlinger::Client> AudioFlinger::registerPid_l(pid_t pid)
      {
          // If pid is already in the mClients wp<> map, then use that entry
          // (for which promote() is always != 0), otherwise create a new entry and Client.
          sp<Client> client = mClients.valueFor(pid).promote();
          if (client == 0) {
              client = new Client(this, pid);
              mClients.add(pid, client);
          }
          return client;
      }
      

      AudioFlinger's member variable mClients stores pid/Client pairs. The Client object for the given pid is looked up first; if it is null, a new Client object is created for the client process.

      AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
          : RefBase(),mAudioFlinger(audioFlinger),
              // FIXME should be a "k" constant not hard-coded, in .h or ro. property, see 4 lines below
              mMemoryDealer(new MemoryDealer(2*1024*1024, "AudioFlinger::Client")),
              mPid(pid),
              mTimedTrackCount(0)
      {
          // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
      }
      

      When the Client object is constructed, it creates a MemoryDealer object, which is used to allocate shared memory.

      frameworks\native\libs\binder\MemoryDealer.cpp

      MemoryDealer::MemoryDealer(size_t size, const char* name)
          : mHeap(new MemoryHeapBase(size, 0, name)),//create shared memory of the given size
          mAllocator(new SimpleBestFitAllocator(size))//create the memory allocator
      {    
      }
      

      MemoryDealer is a utility class for allocating shared memory. Each Client owns one MemoryDealer object, which means every client process allocates shared memory from its own private region. The MemoryDealer constructor creates a 2*1024*1024-byte anonymous shared memory region, and all the Tracks that this client process's AudioTracks create in AudioFlinger allocate their buffers from it (per the comment above, 1 MB of address space covers 32 tracks with 8 buffers of 4 KB each).

      SimpleBestFitAllocator::SimpleBestFitAllocator(size_t size)
      {
          size_t pagesize = getpagesize();
          mHeapSize = ((size + pagesize-1) & ~(pagesize-1));//page alignment
          chunk_t* node = new chunk_t(0, mHeapSize / kMemoryAlign);
          mList.insertHead(node);
      }
      

      Thus, when an AudioTrack in an application process asks AudioFlinger to create a Track object in some PlaybackThread, AudioFlinger first creates a Client object for that application process along with a 2 MB block of shared memory; when the Track is created, it allocates its playback buffer from that 2 MB region.


      Creating the Track Object

      sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(
              const sp<AudioFlinger::Client>& client,
              audio_stream_type_t streamType,
              uint32_t sampleRate,
              audio_format_t format,
              audio_channel_mask_t channelMask,
              size_t frameCount,
              const sp<IMemory>& sharedBuffer,
              int sessionId,
              IAudioFlinger::track_flags_t *flags,
              pid_t tid,
              int uid,
              status_t *status)
      {
          sp<Track> track;
          status_t lStatus;
          bool isTimed = (*flags & IAudioFlinger::TRACK_TIMED) != 0;
          // client expresses a preference for FAST, but we get the final say
          if (*flags & IAudioFlinger::TRACK_FAST) {
            if (
                  // not timed
                  (!isTimed) &&
                  // either of these use cases:
                  (
                    // use case 1: shared buffer with any frame count
                    (
                      (sharedBuffer != 0)
                    ) ||
                    // use case 2: callback handler and frame count is default or at least as large as HAL
                    (
                      (tid != -1) &&
                      ((frameCount == 0) ||
                      (frameCount >= (mFrameCount * kFastTrackMultiplier)))
                    )
                  ) &&
                  // PCM data
                  audio_is_linear_pcm(format) &&
                  // mono or stereo
                  ( (channelMask == AUDIO_CHANNEL_OUT_MONO) ||
                    (channelMask == AUDIO_CHANNEL_OUT_STEREO) ) &&
      #ifndef FAST_TRACKS_AT_NON_NATIVE_SAMPLE_RATE
                  // hardware sample rate
                  (sampleRate == mSampleRate) &&
      #endif
                  // normal mixer has an associated fast mixer
                  hasFastMixer() &&
                  // there are sufficient fast track slots available
                  (mFastTrackAvailMask != 0)
                  // FIXME test that MixerThread for this fast track has a capable output HAL
                  // FIXME add a permission test also?
              ) {
              // if frameCount not specified, then it defaults to fast mixer (HAL) frame count
              if (frameCount == 0) {
                  frameCount = mFrameCount * kFastTrackMultiplier;
              }
              ALOGV("AUDIO_OUTPUT_FLAG_FAST accepted: frameCount=%d mFrameCount=%d",
                      frameCount, mFrameCount);
            } else {
              ALOGV("AUDIO_OUTPUT_FLAG_FAST denied: isTimed=%d sharedBuffer=%p frameCount=%d "
                      "mFrameCount=%d format=%d isLinear=%d channelMask=%#x sampleRate=%u mSampleRate=%u ""hasFastMixer=%d tid=%d fastTrackAvailMask=%#x",
                      isTimed, sharedBuffer.get(), frameCount, mFrameCount, format,
                      audio_is_linear_pcm(format),
                      channelMask, sampleRate, mSampleRate, hasFastMixer(), tid, mFastTrackAvailMask);
              *flags &= ~IAudioFlinger::TRACK_FAST;
              // For compatibility with AudioTrack calculation, buffer depth is forced
              // to be at least 2 x the normal mixer frame count and cover audio hardware latency.
              // This is probably too conservative, but legacy application code may depend on it.
              // If you change this calculation, also review the start threshold which is related.
              uint32_t latencyMs = mOutput->stream->get_latency(mOutput->stream);
              uint32_t minBufCount = latencyMs / ((1000 * mNormalFrameCount) / mSampleRate);
              if (minBufCount < 2) {
                  minBufCount = 2;
              }
              size_t minFrameCount = mNormalFrameCount * minBufCount;
              if (frameCount < minFrameCount) {
                  frameCount = minFrameCount;
              }
            }
          }
          if (mType == DIRECT) {
              if ((format & AUDIO_FORMAT_MAIN_MASK) == AUDIO_FORMAT_PCM) {
                  if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
                      ALOGE("createTrack_l() Bad parameter: sampleRate %u format %d, channelMask 0x%08x ""for output %p with format %d",sampleRate, format, channelMask, mOutput, mFormat);
                      lStatus = BAD_VALUE;
                      goto Exit;
                  }
              }
          } else if (mType == OFFLOAD) {
              if (sampleRate != mSampleRate || format != mFormat || channelMask != mChannelMask) {
                  ALOGE("createTrack_l() Bad parameter: sampleRate %d format %d, channelMask 0x%08x \"""for output %p with format %d",sampleRate, format, channelMask, mOutput, mFormat);
                  lStatus = BAD_VALUE;
                  goto Exit;
              }
          } else {
              if ((format & AUDIO_FORMAT_MAIN_MASK) != AUDIO_FORMAT_PCM) {
                      ALOGE("createTrack_l() Bad parameter: format %d \""
                              "for output %p with format %d",format, mOutput, mFormat);
                      lStatus = BAD_VALUE;
                      goto Exit;
              }
              // Resampler implementation limits input sampling rate to 2 x output sampling rate.
              if (sampleRate > mSampleRate*2) {
                  ALOGE("Sample rate out of range: %u mSampleRate %u", sampleRate, mSampleRate);
                  lStatus = BAD_VALUE;
                  goto Exit;
              }
          }
          lStatus = initCheck();
          if (lStatus != NO_ERROR) {
              ALOGE("Audio driver not initialized.");
              goto Exit;
          }
          { // scope for mLock
              Mutex::Autolock _l(mLock);
              ALOGD("ceateTrack_l() got lock"); // SPRD: Add some log
              // all tracks in same audio session must share the same routing strategy otherwise
              // conflicts will happen when tracks are moved from one output to another by audio policy
              // manager
              uint32_t strategy = AudioSystem::getStrategyForStream(streamType);
              for (size_t i = 0; i < mTracks.size(); ++i) {
                  sp<Track> t = mTracks[i];
                  if (t != 0 && !t->isOutputTrack()) {
                      uint32_t actual = AudioSystem::getStrategyForStream(t->streamType());
                      if (sessionId == t->sessionId() && strategy != actual) {
                          ALOGE("createTrack_l() mismatched strategy; expected %u but found %u",
                                  strategy, actual);
                          lStatus = BAD_VALUE;
                          goto Exit;
                      }
                  }
              }
              if (!isTimed) {
                  track = new Track(this, client, streamType, sampleRate, format,
                          channelMask, frameCount, sharedBuffer, sessionId, uid, *flags);
              } else {
                  track = TimedTrack::create(this, client, streamType, sampleRate, format,
                          channelMask, frameCount, sharedBuffer, sessionId, uid);
              }
              if (track == 0 || track->getCblk() == NULL || track->name() < 0) {
                  lStatus = NO_MEMORY;
                  goto Exit;
              }
              mTracks.add(track);
               sp<EffectChain> chain = getEffectChain_l(sessionId);
              if (chain != 0) {
                  ALOGV("createTrack_l() setting main buffer %p", chain->inBuffer());
                  track->setMainBuffer(chain->inBuffer());
                  chain->setStrategy(AudioSystem::getStrategyForStream(track->streamType()));
                  chain->incTrackCnt();
              }
              if ((*flags & IAudioFlinger::TRACK_FAST) && (tid != -1)) {
                  pid_t callingPid = IPCThreadState::self()->getCallingPid();
                  // we don't have CAP_SYS_NICE, nor do we want to have it as it's too powerful,
                  // so ask activity manager to do this on our behalf
                  sendPrioConfigEvent_l(callingPid, tid, kPriorityAudioApp);
              }
          }
          lStatus = NO_ERROR;
      Exit:
          if (status) {
              *status = lStatus;
          }
          return track;
      }
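
       As a quick worked example of the minimum-buffer rule near the top of createTrack_l() (hypothetical but typical numbers): if mSampleRate is 44100 and mNormalFrameCount is 1024, one normal mixer buffer lasts (1000 * 1024) / 44100 ≈ 23 ms. With a reported hardware latency of 50 ms, minBufCount = 50 / 23 = 2, so a normal track must hold at least minFrameCount = 2 * 1024 = 2048 frames.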
      

       This is where a Track object is created for the AudioTrack. Track inherits from TrackBase, so constructing a Track first runs the TrackBase constructor.


       AudioFlinger::ThreadBase::TrackBase::TrackBase(
                   ThreadBase *thread,                // owning playback thread
                   const sp<Client>& client,          // owning Client
                   uint32_t sampleRate,               // sample rate
                   audio_format_t format,             // audio format
                   audio_channel_mask_t channelMask,  // channel mask
                   size_t frameCount,                 // number of audio frames
                   const sp<IMemory>& sharedBuffer,   // shared memory
                   int sessionId,
                   int clientUid,
                   bool isOut)
          :   RefBase(),
              mThread(thread),
              mClient(client),
              mCblk(NULL),
              // mBuffer
              mState(IDLE),
              mSampleRate(sampleRate),
              mFormat(format),
              mChannelMask(channelMask),
              mChannelCount(popcount(channelMask)),
              mFrameSize(audio_is_linear_pcm(format) ?
                      mChannelCount * audio_bytes_per_sample(format) : sizeof(int8_t)),
              mFrameCount(frameCount),
              mSessionId(sessionId),
              mIsOut(isOut),
              mServerProxy(NULL),
              mId(android_atomic_inc(&nextTrackId)),
              mTerminated(false)
      {
          // if the caller is us, trust the specified uid
          if (IPCThreadState::self()->getCallingPid() != getpid_cached || clientUid == -1) {
              int newclientUid = IPCThreadState::self()->getCallingUid();
              if (clientUid != -1 && clientUid != newclientUid) {
                  ALOGW("uid %d tried to pass itself off as %d", newclientUid, clientUid);
              }
              clientUid = newclientUid;
          }
          // clientUid contains the uid of the app that is responsible for this track, so we can blame
           // this is the uid of the requesting app process
          mUid = clientUid;
          // client == 0 implies sharedBuffer == 0
          ALOG_ASSERT(!(client == 0 && sharedBuffer != 0));
          ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
                  sharedBuffer->size());
           // size of the audio_track_cblk_t control block
           size_t size = sizeof(audio_track_cblk_t);
           // size of the buffer holding the audio data: frameCount * mFrameSize
           size_t bufferSize = (sharedBuffer == 0 ? roundup(frameCount) : frameCount) * mFrameSize;
           /**
            * In stream mode an audio_track_cblk_t is needed to coordinate the producer
            * and the consumer, so it is counted into the shared-memory size:
            *  ---------------------------------------------------
            * | audio_track_cblk_t |           buffer             |
            *  ---------------------------------------------------
            */
           if (sharedBuffer == 0) { // stream mode
               size += bufferSize;
           }
           // if a Client exists, allocate the buffer through the Client
           if (client != 0) {
               // ask the MemoryDealer utility owned by the Client to allocate the buffer
               mCblkMemory = client->heap()->allocate(size);
               // allocation succeeded
               if (mCblkMemory != 0) {
                   // reinterpret the head of the shared memory as an audio_track_cblk_t
                   mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
                  // can't assume mCblk != NULL
              } else {
                  ALOGE("not enough memory for AudioTrack size=%u", size);
                  client->heap()->dump("AudioTrack");
                  return;
              }
           } else { // no Client: allocate the space as a plain array
              // this syntax avoids calling the audio_track_cblk_t constructor twice
              mCblk = (audio_track_cblk_t *) new uint8_t[size];
              // assume mCblk != NULL
          }
           /**
            * If a Client object was created for the app process, the audio data buffer is
            * allocated through the Client; otherwise it is allocated as a plain array.
            * In stream mode the audio_track_cblk_t object is constructed at the head of
            * the allocated buffer, while in static mode a standalone audio_track_cblk_t
            * object is created.
            */
           if (mCblk != NULL) {
               // construct the shared structure in-place
               new(mCblk) audio_track_cblk_t();
               // clear all buffers
               mCblk->frameCount_ = frameCount;
               if (sharedBuffer == 0) { // stream mode
                   // point mBuffer at the first byte of the data buffer
                   mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
                   // zero the data buffer
                   memset(mBuffer, 0, bufferSize);
               } else { // static mode
                   mBuffer = sharedBuffer->pointer();
       #if 0
                   mCblk->mFlags = CBLK_FORCEREADY;    // FIXME hack, need to fix the track ready logic
       #endif
               }
      #ifdef TEE_SINK
      	…
      #endif
              ALOGD("TrackBase constructed"); // SPRD: add some log
          }
      }
      
      

       The TrackBase constructor mainly allocates the shared memory used for playback. In static mode the application process allocates the shared buffer itself, while in stream mode AudioFlinger allocates it. An audio_track_cblk_t object is created in both modes; the only difference is that in stream mode it is placed at the head of the shared memory.

       Static mode:

       (figure: static-mode layout, a standalone audio_track_cblk_t with the data buffer in app-supplied shared memory)

       Stream mode:

       (figure: stream-mode layout, one shared allocation with audio_track_cblk_t at its head and the data buffer following it)
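
       To make that layout concrete, here is a small standalone sketch of the stream-mode size computation (hypothetical parameters; AOSP's roundup() rounds the frame count up to a power of two, approximated below):

       #include <cstdio>
       #include <cstddef>

       // Sketch only: mirrors the size computation in the TrackBase constructor.
       static size_t roundup_pow2(size_t v) {      // stand-in for AOSP's roundup()
           size_t r = 1;
           while (r < v) r <<= 1;
           return r;
       }

       int main() {
           const size_t frameSize  = 2 /*channels*/ * 2 /*bytes per sample, PCM 16-bit*/;
           const size_t frameCount = 2048;
           const size_t cblkSize   = 64;           // placeholder for sizeof(audio_track_cblk_t)

           // stream mode: control block and data buffer live in one shared allocation
           size_t size = cblkSize + roundup_pow2(frameCount) * frameSize;
           printf("stream-mode allocation: %zu bytes\n", size);   // 64 + 8192
           return 0;
       }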

       Next, let's look at the Track constructor itself:

       AudioFlinger::PlaybackThread::Track::Track(
                   PlaybackThread *thread,            // owning playback thread
                   const sp<Client>& client,          // owning Client
                   audio_stream_type_t streamType,    // audio stream type
                   uint32_t sampleRate,               // sample rate
                   audio_format_t format,             // audio format
                   audio_channel_mask_t channelMask,  // channel mask
                   size_t frameCount,                 // number of audio frames
                   const sp<IMemory>& sharedBuffer,   // shared memory
                   int sessionId,
                   int uid,
                   IAudioFlinger::track_flags_t flags)
          :   TrackBase(thread, client, sampleRate, format, channelMask, frameCount, sharedBuffer,sessionId, uid, true /*isOut*/),
          mFillingUpStatus(FS_INVALID),
          // mRetryCount initialized later when needed
          mSharedBuffer(sharedBuffer),
          mStreamType(streamType),
          mName(-1),  // see note below
          mMainBuffer(thread->mixBuffer()),
          mAuxBuffer(NULL),
          mAuxEffectId(0), mHasVolumeController(false),
          mPresentationCompleteFrames(0),
          mFlags(flags),
          mFastIndex(-1),
          mCachedVolume(1.0),
          mIsInvalid(false),
          mAudioTrackServerProxy(NULL),
          mResumeToStopping(false)
      {
           if (mCblk != NULL) { // the audio_track_cblk_t object exists
               if (sharedBuffer == 0) { // stream mode
                   mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
                           mFrameSize);
               } else { // static mode
                   mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
                           mFrameSize);
               }
              mServerProxy = mAudioTrackServerProxy;
              // to avoid leaking a track name, do not allocate one unless there is an mCblk
              mName = thread->getTrackName_l(channelMask, sessionId);
              if (mName < 0) {
                  ALOGE("no more track names available");
                  return;
              }
              // only allocate a fast track index if we were able to allocate a normal track name
              if (flags & IAudioFlinger::TRACK_FAST) {
                  mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
                  ALOG_ASSERT(thread->mFastTrackAvailMask != 0);
                  int i = __builtin_ctz(thread->mFastTrackAvailMask);
                  ALOG_ASSERT(0 < i && i < (int)FastMixerState::kMaxFastTracks);
                  // FIXME This is too eager.  We allocate a fast track index before the
                  //       fast track becomes active.  Since fast tracks are a scarce resource,
                  //       this means we are potentially denying other more important fast tracks 
                  //       from being created.  It would be better to allocate the index dynamically.
                  mFastIndex = i;
                  // Read the initial underruns because this field is never cleared by the fast mixer
                  mObservedUnderruns = thread->getFastTrackUnderruns(i);
                  thread->mFastTrackAvailMask &= ~(1 << i);
              }
          }
          ALOGV("Track constructor name %d, calling pid %d", mName,
                  IPCThreadState::self()->getCallingPid());
      }
      

       In the TrackBase constructor, the memory for the audio_track_cblk_t object is allocated in one of two ways, depending on whether a Client object was created, and the audio_track_cblk_t is then constructed in that memory. The Track constructor goes on to create a different server-side proxy object for each playback mode:

         in stream mode an AudioTrackServerProxy, and in static mode a StaticAudioTrackServerProxy.

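         For reference, the client side mirrors these with AudioTrackClientProxy and StaticAudioTrackClientProxy wrapped around the same control block; roughly (paraphrased from the AudioTrack.cpp of the same era, so treat the exact lines as approximate):

         // In AudioTrack (client process), after createTrack() has returned:
         if (mSharedBuffer == 0) {                   // stream mode
             mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
         } else {                                    // static mode
             mStaticProxy = new StaticAudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
             mProxy = mStaticProxy;
         }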

         In stream mode, an audio data buffer of the requested size is allocated as well; its structure is shown below:

         (figure: layout of the stream-mode audio data buffer)

         Recall that when the Client object was constructed, a MemoryDealer allocation utility was created together with a 2 MB block of anonymous shared memory. It is this MemoryDealer that carves the buffer of the requested size out of that anonymous shared memory.
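
         A minimal sketch of that flow, from the Client's 2 MB heap down to one track's buffer (names and constructor arguments are paraphrased from the AOSP code of this period, so treat the details as approximate):

         // In AudioFlinger::Client's constructor (paraphrased): one shared heap per client
         sp<MemoryDealer> dealer = new MemoryDealer(2 * 1024 * 1024, "AudioFlinger::Client");

         // Later, in TrackBase's constructor: carve one track's buffer out of that heap
         sp<IMemory> cblkMemory = dealer->allocate(size);
         if (cblkMemory != 0) {
             audio_track_cblk_t* cblk =
                     static_cast<audio_track_cblk_t*>(cblkMemory->pointer());
             // cblk now points into the shared heap; in stream mode the data buffer follows it
         }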

        frameworks\native\libs\binder\MemoryDealer.cpp

        sp<IMemory> MemoryDealer::allocate(size_t size)
        {
            sp<IMemory> memory;
            // allocate `size` bytes of shared memory and get back the buffer's offset
            const ssize_t offset = allocator()->allocate(size);
            if (offset >= 0) {
                // wrap the allocated buffer in an Allocation object
                memory = new Allocation(this, heap(), offset, size);
            }
            return memory;
        }
        
        size_t SimpleBestFitAllocator::allocate(size_t size, uint32_t flags)
        {
            Mutex::Autolock _l(mLock);
            ssize_t offset = alloc(size, flags);
            return offset;
        }

        ssize_t SimpleBestFitAllocator::alloc(size_t size, uint32_t flags)
        {
            if (size == 0) {
                return 0;
            }
            size = (size + kMemoryAlign-1) / kMemoryAlign;
            chunk_t* free_chunk = 0;
            chunk_t* cur = mList.head();
            size_t pagesize = getpagesize();
            while (cur) {
                int extra = 0;
                if (flags & PAGE_ALIGNED)
                    extra = ( -cur->start & ((pagesize/kMemoryAlign)-1) ) ;
                // best fit
                if (cur->free && (cur->size >= (size+extra))) {
                    if ((!free_chunk) || (cur->size < free_chunk->size)) {
                        free_chunk = cur;
                    }
                    if (cur->size == size) {
                        break;
                    }
                }
                cur = cur->next;
            }
            if (free_chunk) {
                const size_t free_size = free_chunk->size;
                free_chunk->free = 0;
                free_chunk->size = size;
                if (free_size > size) {
                    int extra = 0;
                    if (flags & PAGE_ALIGNED)
                        extra = ( -free_chunk->start & ((pagesize/kMemoryAlign)-1) ) ;
                    if (extra) {
                        chunk_t* split = new chunk_t(free_chunk->start, extra);
                        free_chunk->start += extra;
                        mList.insertBefore(free_chunk, split);
                    }
                    ALOGE_IF((flags&PAGE_ALIGNED) && 
                            ((free_chunk->start*kMemoryAlign)&(pagesize-1)),
                            "PAGE_ALIGNED requested, but page is not aligned!!!");
                    const ssize_t tail_free = free_size - (size+extra);
                    if (tail_free > 0) {
                        chunk_t* split = new chunk_t(
                                free_chunk->start + free_chunk->size, tail_free);
                        mList.insertAfter(free_chunk, split);
                    }
                }
                return (free_chunk->start)*kMemoryAlign;
            }
            return NO_MEMORY;
        }
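
        To follow the best-fit bookkeeping with concrete numbers (assuming kMemoryAlign is 32 bytes, which I believe is its value in AOSP releases of this era): a request for 1000 bytes is first rounded up to allocation units, (1000 + 31) / 32 = 32 units. If the smallest free chunk that fits starts at unit 64 and is 50 units long, it is marked in use and trimmed to 32 units, the remaining 18 units are split off as a new free chunk, and the function returns the byte offset 64 * 32 = 2048.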
        

        The audio_track_cblk_t object keeps the producer (AudioTrack) and the consumer (AudioFlinger) in step with each other.


        At createTrack() time AudioFlinger allocates the corresponding memory and returns it to AudioTrack through the IMemory interface, so AudioTrack and AudioFlinger end up managing the same audio_track_cblk_t. Through it they implement a ring FIFO: AudioTrack writes audio data into the FIFO, AudioFlinger reads the data back out, and after mixing it is sent to AudioHardware for playback. A toy sketch of this ring buffer follows the list below.

        1) AudioTrack is the producer of the FIFO's data;

        2) AudioFlinger is the consumer of the FIFO's data.
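
        The real control block is a lock-free structure shared across processes (with futexes for blocking); the deliberately simplified, single-process toy below only illustrates the producer/consumer roles just described. None of these names exist in AOSP:

        #include <algorithm>
        #include <atomic>
        #include <cstdint>

        // Toy model of the cblk-style ring FIFO: the producer (AudioTrack) advances
        // `rear` as it writes PCM frames; the consumer (AudioFlinger) advances `front`
        // as it mixes them. Indices grow monotonically; masking maps them into the array.
        struct ToyAudioFifo {
            static constexpr uint32_t kFrames = 2048;   // must be a power of two
            std::atomic<uint32_t> front{0};             // frames consumed so far
            std::atomic<uint32_t> rear{0};              // frames produced so far
            int16_t data[kFrames * 2];                  // 16-bit stereo storage

            uint32_t framesReady() const {              // what the mixer may consume
                return rear.load() - front.load();
            }
            uint32_t write(const int16_t* src, uint32_t frames) {   // producer side
                uint32_t n = std::min(frames, kFrames - framesReady());
                for (uint32_t i = 0; i < n; ++i) {
                    uint32_t idx = (rear.load() + i) & (kFrames - 1);
                    data[idx * 2]     = src[i * 2];     // left sample
                    data[idx * 2 + 1] = src[i * 2 + 1]; // right sample
                }
                rear.fetch_add(n);
                return n;                               // frames actually written
            }
            uint32_t read(int16_t* dst, uint32_t frames) {          // consumer side
                uint32_t n = std::min(frames, framesReady());
                for (uint32_t i = 0; i < n; ++i) {
                    uint32_t idx = (front.load() + i) & (kFrames - 1);
                    dst[i * 2]     = data[idx * 2];
                    dst[i * 2 + 1] = data[idx * 2 + 1];
                }
                front.fetch_add(n);
                return n;
            }
        };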

        Constructing the TrackHandle

        A Track object is only responsible for the audio business itself; it exposes no cross-process Binder interface of its own. The communication work is therefore delegated to another object, and that is the reason TrackHandle exists: it proxies the Track's communication, forming the cross-process channel between Track and AudioTrack.

        AudioFlinger::TrackHandle::TrackHandle(const sp<AudioFlinger::PlaybackThread::Track>& track)
            : BnAudioTrack(), mTrack(track)
        {
        }
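
        The rest of the class is just as thin: each IAudioTrack method simply forwards to the wrapped Track. Paraphrased from the Tracks.cpp of this era (treat the exact signatures as approximate):

        sp<IMemory> AudioFlinger::TrackHandle::getCblk() const {
            return mTrack->getCblk();   // hand the shared control block to the client
        }

        status_t AudioFlinger::TrackHandle::start() {
            return mTrack->start();     // delegate straight to the Track
        }

        void AudioFlinger::TrackHandle::stop() {
            mTrack->stop();
        }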
        


        AudioFlinger owns several worker threads, and each thread owns several Tracks. The playback thread is in fact an instance of MixerThread: in its threadLoop(), the thread mixes all of its Tracks, resampling them when necessary to a common rate (44.1 kHz), and then pushes the mixed audio out through the AudioHardware layer.

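        As a heavily simplified, conceptual sketch of one pass of that loop (this is not the real threadLoop(), which also handles effect chains, standby, timestamps and underruns):

        // Pseudocode: one iteration of a MixerThread-style loop
        while (!exitPending()) {
            prepareTracks_l();               // pick active tracks, set volume/resampler state
            mAudioMixer->process(/* pts */); // pull frames from each track's FIFO, resample, mix
            // write the mixed buffer to the audio HAL's output stream
            mOutput->stream->write(mOutput->stream, mMixBuffer, mixBufferSize);
        }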

        1) The framework or Java layer creates an AudioTrack object through JNI;

        2) Based on the stream type and the other parameters, an already-open audio output device is looked up; if no matching output device is found, AudioFlinger is asked to open a new one;

        3) AudioFlinger creates a mixer thread (MixerThread) for that output device and hands the thread's id back to AudioTrack as the return value of getOutput();

        4) AudioTrack calls AudioFlinger's createTrack() through Binder to create a Track; this also creates the TrackHandle Binder object on the server side and returns the IAudioTrack proxy object;

        5) AudioFlinger registers the Track with the MixerThread;

        6) Through the IAudioTrack interface, AudioTrack obtains the FIFO (audio_track_cblk_t) created inside AudioFlinger, as sketched below.
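
        On the client side, that last step looks roughly like this (paraphrased from AudioTrack::createTrack_l(); treat the details as approximate):

        sp<IAudioTrack> track = audioFlinger->createTrack(/* stream type, rate, format, ... */);
        sp<IMemory> iMem = track->getCblk();    // the shared memory created by AudioFlinger
        audio_track_cblk_t* cblk =
                static_cast<audio_track_cblk_t*>(iMem->pointer());
        // from here on, AudioTrack writes into the FIFO that this cblk describes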



        AudioTrack start sequence


        AudioTrack data-write sequence


        AudioTrack stop sequence

