Android 4.2.2 Multimedia Architecture: Analysis of the MediaPlayer Creation Process (Part 1)
This article is entirely a summary of my own notes from reading the source code. Please credit the source when reposting, thank you.
Discussion and feedback are welcome. QQ: 1037701636  email: [email protected]
Android source version: 4.2.2; hardware platform: Allwinner A31
Preface:
Looking back, I remember studying Android's multimedia framework in 2012, when I was still inexperienced. The source was 2.3 back then, and reading the Stagefright code was genuinely painful. Now, coming back to this multimedia module, everything suddenly feels clear: the layering between modules is well defined, everything has its place, and the questions I run into can usually be resolved. I suppose that is the progress and experience accumulated over more than two years. I am grateful to time for letting me grow.
Teacher Deng Fanping once told me: it is never too late to enter the Internet industry. Indeed, standing on the shoulders of giants and reworking old material again, there is still much more to learn, and more deeply.
Over the next stretch of time I plan to focus on the Android multimedia framework, the camera architecture, SurfaceFlinger and other framework-layer and HAL-layer modules, related Android system customization and tablet-style device development, and, further down, the video capture and display drivers in the Linux kernel, as the core of my upcoming job search. My feeling is that in the mobile Internet and embedded world, these areas are still indispensable.
People who know Android's multimedia framework know it very well; novices like me can only chew through it slowly. Using the 4.2.2 source as the background, this series records the core modules of the multimedia framework as I understand them, for later reference.
In Android, the multimedia framework mainly covers audio/video playback and recording: the former corresponds to decoding, the latter to encoding. Android uses the MediaPlayer class as the foundation of audio and video playback, and a whole chain of processing is built around it. The simplest way to learn a new module is to pick a typical application and trace its implementation to analyze the module's data flow and control flow, just as SurfaceFlinger can be studied through the startup of bootanimation. The typical Java-level entry points to MediaPlayer are the video playback class VideoView and the MediaPlayer class itself for audio.
1. The VideoView class on the app side is in essence implemented with MediaPlayer; it is split out as a separate class only because video playback has to hook up with SurfaceView. This gives it the following structure:
public class VideoView extends SurfaceView
        implements MediaPlayerControl {
    private String TAG = "VideoView";
    // settable by the client
    private Uri mUri;
    private Map<String, String> mHeaders;
A typical, minimal use of VideoView in an app looks like this:
video = (VideoView) findViewById(R.id.videoView1);
mediacontroller = new MediaController(this);
video.setVideoPath(Video_fd);
mediacontroller.setAnchorView(video);      // attach the controller widget to the video view
video.setMediaController(mediacontroller); // set the controller used for video playback
video.start();
Starting from the setVideoPath() API, the following calls happen in sequence:
public void setVideoPath(String path) {
    setVideoURI(Uri.parse(path));
}

public void setVideoURI(Uri uri) {
    setVideoURI(uri, null);
}

/**
 * @hide
 */
public void setVideoURI(Uri uri, Map<String, String> headers) {
    mUri = uri;
    mHeaders = headers;
    mSeekWhenPrepared = 0;
    openVideo();
    requestLayout();
    invalidate();
}
The handling in openVideo() finally hands control over to MediaPlayer.
private void openVideo() {
    if (mUri == null || mSurfaceHolder == null) {
        // not ready for playback just yet, will try again later
        return;
    }
    // Tell the music playback service to pause
    // TODO: these constants need to be published somewhere in the framework.
    Intent i = new Intent("com.android.music.musicservicecommand");
    i.putExtra("command", "pause");
    mContext.sendBroadcast(i);

    // we shouldn't clear the target state, because somebody might have
    // called start() previously
    release(false);
    try {
        mMediaPlayer = new MediaPlayer();
        ......
        mMediaPlayer.setDataSource(mContext, mUri, mHeaders);
        .........
}
The two classes above are coupled even more tightly in an audio playback app, which calls MediaPlayer directly, as shown below:
mediaplayer = new MediaPlayer();
mediaplayer.setDataSource(Music_fd); // set the audio file to play
mediaplayer.prepare();
mediaplayer.seekTo(0);
With that, the basic way the audio/video framework is used from an app is essentially covered.
2. Entering the world of MediaPlayer
2.1 First, let's look at how a MediaPlayer object is created, which is also a basic requirement when analyzing Android source code. The flow goes from Java, through JNI (libmedia_jni.so), into the framework (libmedia.so).
new VideoView -> new MediaPlayer -> native_setup: a typical object construction chain that is handed down into JNI. native_setup is mainly responsible for creating the corresponding object on the native C++ side.
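As a side note, the mapping from the Java native_setup() to its C++ entry point goes through a JNI method table registered when libmedia_jni.so is loaded. The sketch below illustrates the usual binding pattern in android_media_MediaPlayer.cpp; the table contents and signature string are my own illustration, not a verbatim copy of the 4.2.2 file.

#include <jni.h>
#include <android_runtime/AndroidRuntime.h>

using namespace android;

// Hedged sketch: how a Java native method is typically bound to its C++
// implementation inside the media JNI library.
static JNINativeMethod gMethods[] = {
    // Java method name   Java signature              C++ function pointer
    { "native_setup",     "(Ljava/lang/Object;)V",
      (void *)android_media_MediaPlayer_native_setup },
};

// Called once when the JNI library is loaded (e.g. from JNI_OnLoad).
static int register_android_media_MediaPlayer(JNIEnv *env) {
    return AndroidRuntime::registerNativeMethods(env,
            "android/media/MediaPlayer", gMethods, NELEM(gMethods));
}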
JNI handles this in android_media_MediaPlayer_native_setup(), which finally takes us into the C++ world.
android_media_MediaPlayer_native_setup(JNIEnv *env, jobject thiz, jobject weak_this)
{
    ALOGV("native_setup");
    sp<MediaPlayer> mp = new MediaPlayer();
    if (mp == NULL) {
        jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
        return;
    }

    // create new listener and give it to MediaPlayer
    sp<JNIMediaPlayerListener> listener = new JNIMediaPlayerListener(env, thiz, weak_this);
    mp->setListener(listener);

    // Stow our new C++ MediaPlayer in an opaque field in the Java object.
    setMediaPlayer(env, thiz, mp);
}
Good; at this point a real native-side MediaPlayer object has been created. Of course, the Java side still keeps its own object of the corresponding class.
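The setMediaPlayer() call at the end of native_setup() is what ties the two together: roughly speaking, it stores the strong pointer to the C++ MediaPlayer into an int field of the Java MediaPlayer object so that later JNI calls can fetch it back. The sketch below captures that idea; the field and lock names are abbreviated and should be treated as an illustration, not a verbatim quote.

// Hedged sketch of the "stow the C++ object in the Java object" idea.
// fields.context is assumed to cache the jfieldID of the Java-side int
// field (mNativeContext); sLock is a module-level mutex in the JNI file.
struct fields_t { jfieldID context; };
static fields_t fields;
static Mutex sLock;

static sp<MediaPlayer> setMediaPlayer(JNIEnv* env, jobject thiz, const sp<MediaPlayer>& player)
{
    Mutex::Autolock l(sLock);
    sp<MediaPlayer> old = (MediaPlayer*)env->GetIntField(thiz, fields.context);
    if (player.get()) {
        player->incStrong(thiz);   // hold a strong reference on behalf of the Java object
    }
    if (old != 0) {
        old->decStrong(thiz);      // drop the reference held for the previous native player
    }
    env->SetIntField(thiz, fields.context, (int)player.get());
    return old;
}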
2.2 setDataSource
The C++ code of MediaPlayer lives under /home/A31_Android4.2.2/android/frameworks/av/media/libmedia and is built into libmedia.so.
Let's look at how this API is handled. From here on, only the C++ flow in the framework layer is analyzed; the Java flow is similar to the analysis above.
status_t MediaPlayer::setDataSource(int fd, int64_t offset, int64_t length)
{
    ALOGV("setDataSource(%d, %lld, %lld)", fd, offset, length);
    status_t err = UNKNOWN_ERROR;
    const sp<IMediaPlayerService>& service(getMediaPlayerService());
    if (service != 0) {
        sp<IMediaPlayer> player(service->create(getpid(), this, mAudioSessionId)); // returns a BpMediaPlayer
        if ((NO_ERROR != doSetRetransmitEndpoint(player)) ||
            (NO_ERROR != player->setDataSource(fd, offset, length))) {             // set the media source
            player.clear();
        }
        err = attachNewPlayer(player);
    }
    return err;
}
This is a typical Binder client/server setup: obtain the proxy of MediaPlayerService (MPS) and hand the request over to MPS for processing.
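getMediaPlayerService() is where that proxy comes from: it asks the ServiceManager for the service registered under the name "media.player" and converts the returned binder into an IMediaPlayerService interface. A condensed sketch of that lookup (the locking and the death-notification registration done by IMediaDeathNotifier are omitted here) is:

#include <unistd.h>
#include <binder/IServiceManager.h>
#include <media/IMediaPlayerService.h>

using namespace android;

// Hedged sketch of the service lookup behind getMediaPlayerService().
sp<IMediaPlayerService> getMediaPlayerServiceSketch()
{
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder;
    do {
        binder = sm->getService(String16("media.player"));
        if (binder != 0) break;
        usleep(500000);            // service not published yet, wait and retry
    } while (true);

    // Turn the raw IBinder into a typed proxy (BpMediaPlayerService).
    return interface_cast<IMediaPlayerService>(binder);
}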
3. The work of MediaPlayerService
Like countless other services, MPS exists as a server, and it is a classic subject for analyzing the Binder driver architecture and its principles. It is started inside mediaserver, alongside the other multimedia services such as CameraService and AudioFlinger.
int main(int argc, char** argv)
{
    signal(SIGPIPE, SIG_IGN);

    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();        // start the multimedia services: audio, camera, etc.
    MediaPlayerService::instantiate();
    CameraService::instantiate();
    AudioPolicyService::instantiate();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}
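MediaPlayerService::instantiate() is what publishes the service under the name "media.player" that the client looked up above; conceptually it boils down to a single addService() call on the ServiceManager, roughly as sketched here:

// Hedged sketch of what MediaPlayerService::instantiate() amounts to:
// create the service object and register it with the ServiceManager under
// the name that MediaPlayer clients query ("media.player").
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}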
3.1 Handling the create request:
sp<IMediaPlayer> MediaPlayerService::create(pid_t pid, const sp<IMediaPlayerClient>& client,
        int audioSessionId) // create a media player and return a Client
{
    ALOGV("MediaPlayerService::create");
    int32_t connId = android_atomic_inc(&mNextConnId);

    sp<Client> c = new Client(
            this, pid, connId, client, audioSessionId,
            IPCThreadState::self()->getCallingUid()); // create the inner class, which implements BnMediaPlayer
    .....
}
This creates the MPS inner client class Client (derived from the Binder native-side interface class BnMediaPlayer), which looks very similar to what CameraService does. With this server-side Client in place, the application-side MediaPlayer only needs to talk to the Client from now on, and it is an anonymous Binder that makes this work.
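The "anonymous Binder" here is the IMediaPlayer interface itself: the Client created above is a BnMediaPlayer, its IBinder is written into the Binder reply, and the proxy code on the application side turns it back into a BpMediaPlayer. A hedged sketch of the proxy side of create() in IMediaPlayerService.cpp shows the pattern; the transaction code name CREATE and the exact parcel layout are illustrative rather than a verbatim quote of the 4.2.2 source.

// Hedged sketch of BpMediaPlayerService::create(): the reply carries an
// anonymous IBinder that becomes a BpMediaPlayer on the application side.
sp<IMediaPlayer> BpMediaPlayerService::create(
        pid_t pid, const sp<IMediaPlayerClient>& client, int audioSessionId)
{
    Parcel data, reply;
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeInt32(pid);
    data.writeStrongBinder(client->asBinder());   // callback interface used for notify()
    data.writeInt32(audioSessionId);

    remote()->transact(CREATE, data, &reply);

    // The service wrote the new Client's IBinder into the reply;
    // interface_cast<> wraps it in a BpMediaPlayer on this side.
    return interface_cast<IMediaPlayer>(reply.readStrongBinder());
}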
3.2 player->setDataSource()
player is the proxy returned by the call into MPS (a BpMediaPlayer built on top of BpBinder); its calls ultimately land in setDataSource() of the MPS inner class Client.
status_t MediaPlayerService::Client::setDataSource(int fd, int64_t offset, int64_t length)
{
    .........
    player_type playerType = MediaPlayerFactory::getPlayerType(this,
                                                               fd,
                                                               offset,
                                                               length,
                                                               true); // pick a player type based on the media source
    ........
    sp<MediaPlayerBase> p = setDataSource_pre(playerType);

    // now set data source
    setDataSource_post(p, p->setDataSource(fd, offset, length));
    return mStatus;
}
A MediaPlayerFactory shows up here; think of it as the "player factory" class. It inspects the incoming media source (its container or extension, e.g. mp4, avi, 3gp) to decide which player type to use; how this selection presumably works is sketched later, right after the scoreFactory() code in section 3.4. For this analysis, assume the returned playerType is STAGEFRIGHT_PLAYER. Next, look at the setDataSource_pre() function:
sp<MediaPlayerBase> MediaPlayerService::Client::setDataSource_pre(
        player_type playerType)
{
    // create the right type of player
    sp<MediaPlayerBase> p = createPlayer(playerType); // create a player
    if (p == NULL) {
        return p;
    }

    if (!p->hardwareOutput()) {
        mAudioOutput = new AudioOutput(mAudioSessionId);
        static_cast<MediaPlayerInterface*>(p.get())->setAudioSink(mAudioOutput); // cast to MediaPlayerInterface
    }

    return p;
}
3.3 Actually creating a player suited to the current media file: createPlayer().
sp<MediaPlayerBase> MediaPlayerService::Client::createPlayer(player_type playerType)
{
    // determine if we have the right player type
    sp<MediaPlayerBase> p = mPlayer;
    if ((p != NULL) && (p->playerType() != playerType)) {
        ALOGV("delete player");
        p.clear();
    }
    if (p == NULL) {
        p = MediaPlayerFactory::createPlayer(playerType, this, notify); // create a new player
    }

    if (p != NULL) {
        p->setUID(mUID);
    }

    return p;
}
On the first call, mPlayer is certainly empty, so in the end the work comes back to the factory class, because it is this class that maintains the list of player types supported on the current platform (ultimately, the kinds of decoders available).
sp<MediaPlayerBase> MediaPlayerFactory::createPlayer(
        player_type playerType,
        void* cookie,
        notify_callback_f notifyFunc)
{
    sp<MediaPlayerBase> p;
    IFactory* factory;
    status_t init_result;
    Mutex::Autolock lock_(&sLock);

    if (sFactoryMap.indexOfKey(playerType) < 0) {
        ALOGE("Failed to create player object of type %d, no registered"
              " factory", playerType);
        return p;
    }

    factory = sFactoryMap.valueFor(playerType); // look up the factory for this type, e.g. StagefrightPlayerFactory
    CHECK(NULL != factory);
    p = factory->createPlayer();                // create the actual player, e.g. a StagefrightPlayer

    if (p == NULL) {
        ALOGE("Failed to create player object of type %d, create failed",
               playerType);
        return p;
    }

    init_result = p->initCheck();
    if (init_result == NO_ERROR) {
        p->setNotifyCallback(cookie, notifyFunc);
    } else {
        ALOGE("Failed to create player object of type %d, initCheck failed"
              " (res = %d)", playerType, init_result);
        p.clear();
    }

    return p;
}
Here a global variable, sFactoryMap, appears:
MediaPlayerFactory::tFactoryMap sFactoryMap; // actual type: typedef KeyedVector<player_type, IFactory*> tFactoryMap

KeyedVector is a vector-like container from the Android utils library; unlike a plain array, entries are looked up by key (here the player_type) rather than by position. sFactoryMap therefore represents the list of player types currently supported. So where does this table get initialized? For that we need to go back to the constructor of MediaPlayerService.
3.4 Registering the player types supported by the system:
MediaPlayerService::MediaPlayerService()
{
    ALOGV("MediaPlayerService created");
    mNextConnId = 1;

    mBatteryAudio.refCount = 0;
    for (int i = 0; i < NUM_AUDIO_DEVICES; i++) {
        mBatteryAudio.deviceOn[i] = 0;
        mBatteryAudio.lastTime[i] = 0;
        mBatteryAudio.totalTime[i] = 0;
    }
    // speaker is on by default
    mBatteryAudio.deviceOn[SPEAKER] = 1;

    MediaPlayerFactory::registerBuiltinFactories(); // register the built-in player factories
}
void MediaPlayerFactory::registerBuiltinFactories() {
    Mutex::Autolock lock_(&sLock);

    if (sInitComplete)
        return;

    registerFactory_l(new CedarXPlayerFactory(), CEDARX_PLAYER);
    registerFactory_l(new CedarAPlayerFactory(), CEDARA_PLAYER);
    registerFactory_l(new TPlayerFactory(), THUMBNAIL_PLAYER);
    registerFactory_l(new StagefrightPlayerFactory(), STAGEFRIGHT_PLAYER);
    registerFactory_l(new NuPlayerFactory(), NU_PLAYER);
    registerFactory_l(new SonivoxPlayerFactory(), SONIVOX_PLAYER);
    registerFactory_l(new TestPlayerFactory(), TEST_PLAYER); // register the different players
                                                             // (the CedarX/CedarA/TPlayer entries are Allwinner vendor additions)

    sInitComplete = true;
}
Here we take the STAGEFRIGHT_PLAYER case from our example and analyze it:
a. new StagefrightPlayerFactory() creates a player factory class with the following structure:
class StagefrightPlayerFactory :
    public MediaPlayerFactory::IFactory {
  public:
    virtual float scoreFactory(const sp<IMediaPlayer>& client,
                               int fd,
                               int64_t offset,
                               int64_t length,
                               float curScore) {
        char buf[20];
        lseek(fd, offset, SEEK_SET);
        read(fd, buf, sizeof(buf));
        lseek(fd, offset, SEEK_SET);

        long ident = *((long*)buf);

        // Ogg vorbis?
        if (ident == 0x5367674f) // 'OggS'
            return 1.0;

        return 0.0;
    }

    virtual sp<MediaPlayerBase> createPlayer() {
        ALOGV(" create StagefrightPlayer");
        return new StagefrightPlayer(); // construct a StagefrightPlayer
    }
};
Clearly, the defining feature of this class is that it inherits from and implements the IFactory interface.
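scoreFactory() is the hook that getPlayerType() uses to pick a player: each registered factory peeks at the beginning of the source and returns a confidence score, and the highest-scoring type wins. Note that 0x5367674f is simply the four ASCII bytes 'O', 'g', 'g', 'S' read as a little-endian 32-bit value, i.e. the magic of the Ogg container. The sketch below assumes that getPlayerType() is a straightforward scoring loop over sFactoryMap; it illustrates the idea and is not the verbatim 4.2.2 implementation.

// Hedged sketch of the scoring loop presumably behind MediaPlayerFactory::getPlayerType():
// ask every registered factory to score the source and keep the best candidate,
// falling back to a default when nothing scores higher. Illustrative only.
static player_type getPlayerTypeSketch(const sp<IMediaPlayer>& client,
                                       int fd, int64_t offset, int64_t length)
{
    player_type best = STAGEFRIGHT_PLAYER;   // assumed default for this sketch
    float bestScore = 0.0f;

    for (size_t i = 0; i < sFactoryMap.size(); i++) {
        IFactory* factory = sFactoryMap.valueAt(i);
        float score = factory->scoreFactory(client, fd, offset, length, bestScore);
        if (score > bestScore) {
            bestScore = score;
            best = sFactoryMap.keyAt(i);
        }
    }
    return best;
}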
b. The newly created factory object is then registered: the player type is added as the key and the corresponding factory is stored into the sFactoryMap keyed vector.
status_t MediaPlayerFactory::registerFactory_l(IFactory* factory,
                                               player_type type) {
    if (NULL == factory) {
        ALOGE("Failed to register MediaPlayerFactory of type %d, factory is"
              " NULL.", type);
        return BAD_VALUE;
    }

    if (sFactoryMap.indexOfKey(type) >= 0) {
        ALOGE("Failed to register MediaPlayerFactory of type %d, type is"
              " already registered.", type);
        return ALREADY_EXISTS;
    }

    if (sFactoryMap.add(type, factory) < 0) {
        ALOGE("Failed to register MediaPlayerFactory of type %d, failed to add"
              " to map.", type);
        return UNKNOWN_ERROR;
    }

    return OK;
}
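registerFactory_l() relies on exactly the KeyedVector API mentioned earlier: add() to insert under a key, indexOfKey() to test presence, valueFor() to look up by key. A tiny standalone illustration of these calls (the key and value types here are arbitrary demo types, not the real player_type/IFactory*):

#include <utils/KeyedVector.h>

using namespace android;

// Minimal illustration of the KeyedVector calls used by MediaPlayerFactory.
const char* keyedVectorDemo()
{
    KeyedVector<int, const char*> table;

    table.add(3, "stagefright");      // insert a value under key 3
    table.add(4, "nuplayer");

    if (table.indexOfKey(3) < 0)      // a negative index means the key is absent
        return NULL;
    return table.valueFor(3);         // -> "stagefright"
}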
Returning to the code in section 3.3: factory = sFactoryMap.valueFor(playerType) retrieves, by type, the StagefrightPlayerFactory registered earlier, i.e. the factory object from the registration above. The actual creation is done by calling its virtual createPlayer():
virtual sp<MediaPlayerBase> createPlayer() {
    ALOGV(" create StagefrightPlayer");
    return new StagefrightPlayer(); // construct a StagefrightPlayer
}
What follows next is the real entry into the Stagefright implementation:
StagefrightPlayer::StagefrightPlayer()
    : mPlayer(new AwesomePlayer) { // create an AwesomePlayer; this class belongs to Stagefright
    ALOGV("StagefrightPlayer");

    mPlayer->setListener(this);    // register this StagefrightPlayer as the AwesomePlayer's listener
}
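From here on, StagefrightPlayer is essentially a thin wrapper: each MediaPlayerBase call is forwarded to the AwesomePlayer it owns. The snippet below is a hedged sketch of this delegation pattern in StagefrightPlayer.cpp; the dup() of the file descriptor reflects the usual ownership handling, but treat the bodies as an illustration rather than a verbatim quote.

#include <unistd.h>  // for dup()

// Hedged sketch: StagefrightPlayer simply delegates to its AwesomePlayer.
status_t StagefrightPlayer::setDataSource(int fd, int64_t offset, int64_t length) {
    // hand a duplicated descriptor straight to AwesomePlayer
    return mPlayer->setDataSource(dup(fd), offset, length);
}

status_t StagefrightPlayer::prepare() {
    return mPlayer->prepare();
}

status_t StagefrightPlayer::start() {
    return mPlayer->play();
}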
3.5 AwesomePlayer: stepping inside Stagefright
////////////////////////////////////////////////////////////////////////////////

AwesomePlayer::AwesomePlayer()
    : mQueueStarted(false),
      mUIDValid(false),
      mTimeSource(NULL),
      mVideoRenderingStarted(false),
      mVideoRendererIsPreview(false),
      mAudioPlayer(NULL),
      mDisplayWidth(0),
      mDisplayHeight(0),
      mVideoScalingMode(NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW),
      mFlags(0),
      mExtractorFlags(0),
      mVideoBuffer(NULL),
      mDecryptHandle(NULL),
      mLastVideoTimeUs(-1),
      mTextDriver(NULL) {
    CHECK_EQ(mClient.connect(), (status_t)OK); // OMXClient; after connect() it holds mOMX, a BpOMX
    DataSource::RegisterDefaultSniffers();

    mVideoEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoEvent);          // register the onVideoEvent handler
    mVideoEventPending = false;
    mStreamDoneEvent = new AwesomeEvent(this, &AwesomePlayer::onStreamDone);     // register the onStreamDone handler
    mStreamDoneEventPending = false;
    mBufferingEvent = new AwesomeEvent(this, &AwesomePlayer::onBufferingUpdate); // register onBufferingUpdate
    mBufferingEventPending = false;
    mVideoLagEvent = new AwesomeEvent(this, &AwesomePlayer::onVideoLagUpdate);
    mVideoEventPending = false;

    mCheckAudioStatusEvent = new AwesomeEvent(
            this, &AwesomePlayer::onCheckAudioStatus);

    mAudioStatusEventPending = false;

    reset();
}
The AwesomePlayer constructor mainly registers several event handlers; the details of the event handling mechanism will be shared in the next article. A brief sketch of what these AwesomeEvent objects look like follows below.
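To make the "event registration" above a little more concrete: AwesomeEvent is a small adapter that stores a pointer to the AwesomePlayer plus a member-function pointer, and when the event fires on the TimedEventQueue it simply invokes that method. The sketch below follows the struct defined in AwesomePlayer.cpp but should be read as an outline, not a verbatim copy.

// Hedged sketch of the AwesomeEvent adapter used by AwesomePlayer:
// it binds a member function to the player and is posted on the player's
// TimedEventQueue; fire() calls the bound method when the event is due.
struct AwesomeEvent : public TimedEventQueue::Event {
    AwesomeEvent(AwesomePlayer *player,
                 void (AwesomePlayer::*method)())
        : mPlayer(player),
          mMethod(method) {
    }

protected:
    virtual void fire(TimedEventQueue * /* queue */, int64_t /* now_us */) {
        (mPlayer->*mMethod)();     // e.g. AwesomePlayer::onVideoEvent()
    }

private:
    AwesomePlayer *mPlayer;
    void (AwesomePlayer::*mMethod)();
};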