Explore low-latency video encoding with VideoToolbox

    Supporting low-latency encoding has become an important aspect of the video application development process. Discover how VideoToolbox supports low-delay H.264 hardware encoding to minimize end-to-end latency and achieve new levels of performance for optimal real-time communication and high-quality video playback.

    Resources

    • Video Toolbox
      • HD Video
      • SD Video

    Related Videos

    WWDC21

    • What’s new in AVFoundation

    WWDC20

    • Edit and play back HDR video with AVFoundation

    Hi. My name is Peikang, and I’m from the Video Coding and Processing team. Welcome to “Explore low-latency video encoding with Video Toolbox.” Low-latency encoding is very important for many video applications, especially real-time video communication apps. In this talk, I’m going to introduce a new encoding mode in Video Toolbox to achieve low-latency encoding. The goal of this new mode is to optimize the existing encoder pipeline for real-time applications. So what does a real-time video application require? We need to minimize the end-to-end latency in the communication so that people won’t be talking over each other.

    We need to enhance interoperability by enabling video apps to communicate with more devices. The encoder pipeline should be efficient when there is more than one recipient in the call.

    The app needs to present the video in its best visual quality.

    We need a reliable mechanism to recover the communication from errors introduced by network loss.

    The low-latency video encoding that I’m going to talk about today optimizes in all of these aspects. With this mode, your real-time application can achieve new levels of performance.

    In this talk, first I’m going to give an overview of low-latency video encoding, so we can get a basic idea of how we achieve low latency in the pipeline. Then I’m going to show how to use the VTCompressionSession APIs to build the pipeline and encode in low-latency mode. Finally, I will talk about multiple features we are introducing in low-latency mode. Let me first give an overview of low-latency video encoding. Here is a brief diagram of a video encoder pipeline on Apple’s platforms. Video Toolbox takes a CVImageBuffer as the input image. It asks the video encoder to apply a compression algorithm such as H.264 to reduce the size of the raw data.

    The output compressed data is wrapped in a CMSampleBuffer, and it can be transmitted over the network for video communication. As we may notice from the previous diagram, the end-to-end latency is affected by two factors: the processing time and the network transmission time.

    To minimize the processing time, the low-latency mode eliminates frame reordering; a one-in, one-out encoding pattern is followed. Also, the rate controller in this mode adapts faster in response to network changes, so the delay caused by network congestion is minimized as well. With these two optimizations, we can already see obvious performance improvements compared with the default mode. Low-latency encoding can reduce the delay by up to 100 milliseconds for a 720p 30fps video. Such savings can be critical for video conferencing.

    As we reduce the latency, we can achieve a more efficient encoding pipeline for real-time communications like video conferencing and live broadcasting.

    Also, the low-latency mode always uses a hardware-accelerated video encoder in order to save power. Note that the supported video codec type in this mode is H.264, and we’re bringing this feature to both iOS and macOS.

    Next, I want to talk about how to use low-latency mode with Video Toolbox. I’m going to first recap the use of VTCompressionSession and then show you the steps needed to enable low-latency encoding. When we use VTCompressionSession, the first thing is to create the session with the VTCompressionSessionCreate API.

    We can optionally configure the session, such as setting the target bit rate, through the VTSessionSetProperty API. If no configuration is provided, the encoder operates with its default behavior.

    After the session is created and properly configured, we can pass a CVImageBuffer to the session with the VTCompressionSessionEncodeFrame call. The encoded result can be retrieved from the output handler provided during session creation.
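
    As a minimal sketch, assuming a configured compressionSession, a captured pixelBuffer, and its presentation timestamp pts (names not from the session, just for illustration), the encode call looks like this:

      // Hand one captured frame to the encoder; the output handler passed to
      // VTCompressionSessionCreate receives the compressed CMSampleBuffer.
      OSStatus err = VTCompressionSessionEncodeFrame(compressionSession,
                                                     pixelBuffer,    // CVImageBufferRef from capture
                                                     pts,            // CMTime presentation timestamp
                                                     kCMTimeInvalid, // frame duration unknown
                                                     NULL,           // no per-frame properties
                                                     NULL,           // no source frame refcon
                                                     NULL);          // no info flags needed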

    Enabling low-latency encoding in the compression session is easy. The only change we need is in the session creation.

    Here is a code snippet showing how to do that. First we need a CFMutableDictionary for the encoderSpecification. The encoderSpecification is used to specify a particular video encoder that the session must use. Then we need to set the EnableLowLatencyRateControl flag in the encoderSpecification.

    Finally, we need to give this encoderSpecification to VTCompressionSessionCreate, and the compression session will be operating in low-latency mode.

    The configuration step is the same as usual. For example, we can set the target bit rate with the AverageBitRate property.
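
    For example, here is a sketch that sets a hypothetical 1 Mbps target through the kVTCompressionPropertyKey_AverageBitRate property:

      // Hypothetical 1 Mbps target; choose a rate appropriate for your app.
      int32_t bitRate = 1000000;
      CFNumberRef bitRateNumber = CFNumberCreate(kCFAllocatorDefault,
                                                 kCFNumberSInt32Type, &bitRate);
      VTSessionSetProperty(compressionSession,
                           kVTCompressionPropertyKey_AverageBitRate,
                           bitRateNumber);
      CFRelease(bitRateNumber);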

    OK, we’ve covered the basics of the low-latency mode with Video Toolbox. I’d like to move on to the new features in this mode that can further help you develop a real-time video application. So far, we’ve talked about the latency benefit of using the low-latency mode. The rest of the benefits can be achieved by the features I’m going to introduce.

    The first feature is the new profiles. We enhanced interoperability by adding two new profiles to the pipeline.

    And we are also excited to talk about temporal scalability. This feature can be very helpful in video conferencing.

    You can now have fine-grained control over the image quality with the max frame quantization parameter. Last, we want to improve error resilience by adding support for long-term reference.

    Let’s talk about the new profile support. A profile defines a group of coding algorithms that the decoder is capable of supporting. In order to communicate with the receiver side, the encoded bitstream should comply with the specific profile that the decoder supports.

    Here in Video Toolbox, we support a bunch of profiles, such as baseline profile, main profile, and high profile.

    Today we added two new profiles to the family: constrained baseline profile, CBP, and constrained high profile, CHP.

    CBP is primarily used for low-cost applications, while CHP has more advanced algorithms for a better compression ratio. You should check the decoder capabilities in order to know which profile should be used.

    To request CBP, simply set the ProfileLevel session property to ConstrainedBaseline_AutoLevel.

    Similarly, we can set the profile level to ConstrainedHigh_AutoLevel to use CHP.

    Now let’s talk about temporal scalability. We can use temporal scalability to enhance the efficiency for multi-party video calls.

    Let us consider a simple, three-party video conferencing scenario. In this model, receiver A has a lower bandwidth of 600 kbps, and receiver B has a higher bandwidth of 1,000 kbps.

    Normally, the sender needs to encode two sets of bitstreams in order to meet the downlink bandwidth of each receiver side. This may not be optimal.

    The model can be more efficient with temporal scalability, where the sender only needs to encode a single bitstream, which can later be divided into two layers.

    Let me show you how this process works.

    Here is a sequence of encoded video frames where each of the frames uses the previous frame as predictive reference.

    We can pull half of the frames into another layer, and we can change the reference so that only the frames in the original layer are used for prediction.

    The original layer is called the base layer, and the newly constructed layer is called the enhancement layer.

    The enhancement layer can be used as a supplement of the base layer in order to improve the frame rate.

    For receiver "A," we can send base layer frames because the base layer itself is decodable already. And more importantly, since the base layer contains only half of the frames, the transmitted data rate will be low.

    On the other hand, receiver B can enjoy a smoother video since it has a sufficient bandwidth to receive base layer frames and enhancement layer frames.

    Let me show you the videos encoded using temporal scalability. I’m going to play two videos, one from the base layer, and the other from the base layer together with the enhancement layer.

    The base layer itself can be played normally, but at the same time, we may notice the video is not quite smooth.

    We can immediately see the difference if we play the second video. The right video has a higher frame rate compared with the left one because it contains both base layer and enhancement layer.

    The left video has 50% of the input frame rate, and it uses 60% of the target bit rate. These two videos only require the encoder to encode a single bitstream at a time. This is much more power efficient when we are doing multi-party video conferencing.

    Another benefit of temporal scalability is error resilience. As we can see, the frames in the enhancement layer are not used for prediction, so there is no dependency on these frames.

    This means that if one or more enhancement layer frames are dropped during network transmission, other frames won’t be affected. This makes the whole session more robust.

    The way to enable temporal scalability is pretty straightforward. We created a new session property in low-latency mode called BaseLayerFrameRateFraction. Simply set this property to 0.5, meaning half of the input frames are assigned to the base layer and the rest are assigned to the enhancement layer.

    You can check the layer information from the sample buffer attachment. For base layer frames, CMSampleAttachmentKey_IsDependedOnByOthers will be true; for enhancement layer frames, it will be false.
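
    Here is a sketch of both steps, assuming a compressionSession created in low-latency mode and an output handler that receives the encoded sampleBuffer:

      // Assign half of the input frames to the base layer.
      float frameRateFraction = 0.5f;
      CFNumberRef fractionNumber = CFNumberCreate(kCFAllocatorDefault,
                                                  kCFNumberFloat32Type,
                                                  &frameRateFraction);
      VTSessionSetProperty(compressionSession,
                           kVTCompressionPropertyKey_BaseLayerFrameRateFraction,
                           fractionNumber);
      CFRelease(fractionNumber);

      // In the output handler: base layer frames carry
      // kCMSampleAttachmentKey_IsDependedOnByOthers == kCFBooleanTrue.
      CFArrayRef attachments =
          CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, false);
      if (attachments && CFArrayGetCount(attachments) > 0) {
          CFDictionaryRef attachment =
              (CFDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
          Boolean isBaseLayer =
              CFDictionaryGetValue(attachment,
                                   kCMSampleAttachmentKey_IsDependedOnByOthers)
              == kCFBooleanTrue;
      }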

    We also have the option to set the target bit rate for each layer. Remember that we use the session property AverageBitRate to configure the overall target bit rate.

    After the target bit rate is configured, we can set the new BaseLayerBitRateFraction property to control the percentage of the target bit rate needed for the base layer.

    If this property is not set, a default value of 0.6 is used, and we recommend a base layer bit rate fraction in the range of 0.6 to 0.8.
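
    A sketch of setting that fraction explicitly:

      // Give the base layer 60% of AverageBitRate (matching the default).
      float bitRateFraction = 0.6f;
      CFNumberRef bitRateFractionNumber = CFNumberCreate(kCFAllocatorDefault,
                                                         kCFNumberFloat32Type,
                                                         &bitRateFraction);
      VTSessionSetProperty(compressionSession,
                           kVTCompressionPropertyKey_BaseLayerBitRateFraction,
                           bitRateFractionNumber);
      CFRelease(bitRateFractionNumber);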

    Now, let’s move to max frame quantization parameter, or max frame QP.

    Frame QP is used to regulate image quality and data rate.

    We can use a low frame QP to generate a high-quality image. The image size will be large in this case.

    On the other hand, we can use a high frame QP to generate an image of lower quality but smaller size.

    In low-latency mode, the encoder adjusts the frame QP using factors such as image complexity, input frame rate, and video motion in order to produce the best visual quality under the current target bit rate constraint. So we encourage you to rely on the encoder’s default behavior for adjusting frame QP.

    But in some cases where the client has a specific requirement for the video quality, we now let you control the max frame QP that the encoder is allowed to use.

    With the max frame QP, the encoder will always choose a frame QP no larger than this limit, so the client can have fine-grained control over the image quality.

    It’s worth mentioning that the regular rate control still works even with the max frame QP specified. If the encoder hits the max frame QP cap but is running out of bit rate budget, it will start dropping frames in order to maintain the target bit rate.

    One example of using this feature is to transmit screen content video over a poor network.

    You can make a trade-off by sacrificing the frame rate in order to send sharp screen content images. Setting max frame QP can meet this requirement.

    Let’s look at the interface. You can pass the max frame QP with the new session property MaxAllowedFrameQP.

    Keep in mind that the value of max frame QP must be in the range of 1 to 51, according to the H.264 standard.
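
    For example, a sketch with a hypothetical cap of 30:

      // Hypothetical cap of 30; the value must be within [1, 51] for H.264.
      int32_t maxFrameQP = 30;
      CFNumberRef maxQPNumber = CFNumberCreate(kCFAllocatorDefault,
                                               kCFNumberSInt32Type, &maxFrameQP);
      VTSessionSetProperty(compressionSession,
                           kVTCompressionPropertyKey_MaxAllowedFrameQP,
                           maxQPNumber);
      CFRelease(maxQPNumber);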

    Let’s talk about the last feature we’ve developed in low-latency mode, long-term reference.

    Long-term reference, or LTR, can be used for error resilience. Let’s look at this diagram showing the encoder, the sender client, and the receiver client in the pipeline.

    Suppose the video communication goes through a network with poor connection. Frame loss can happen because of the transmission error.

    When the receiver client detects a frame loss, it can request a refresh frame in order to reset the session.

    If the encoder gets the request, normally it will encode a key frame for the refresh purpose. But the key frame is usually quite large.

    A large key frame takes a longer time to get to the receiver. Since the network condition is already poor, a large frame could compound the network congestion issue.

    So, can we use a predictive frame instead of a key frame for refresh? The answer is yes, if we have frame acknowledgement. Let me show you how it works.

    First, we need to decide which frames require acknowledgement. We call these frames long-term reference, or LTR. This is the encoder’s decision. When the sender client transmits an LTR frame, it also needs to request acknowledgement from the receiver client.

    If the LTR frame is successfully received, an acknowledgement needs to be sent back.

    Once the sender client gets the acknowledgement and passes that information to the encoder, the encoder knows which LTR frames have been received by the other side.

    Let’s look at the bad network situation again.

    When the encoder gets the refresh request, since this time the encoder has a set of acknowledged LTRs, it is able to encode a frame that is predicted from one of these acknowledged LTRs.

    A frame that is encoded in this way is called LTR-P.

    Usually an LTR-P is much smaller in terms of encoded frame size compared to a key frame, so it is easier to transmit. Now, let’s talk about the APIs for LTR. Note that frame acknowledgement needs to be handled by the application layer. It can be done with mechanisms such as the RPSI message in the RTP Control Protocol.

    Here we’re only going to focus on how the encoder and the sender client communicate in this process.

    Once you have enabled low-latency encoding, you can enable this feature by setting the EnableLTR session property.

    When an LTR frame is encoded, the encoder will signal a unique frame token in the sample attachment RequireLTRAcknowledgementToken.
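
    A sketch of reading that token in the output handler, assuming sampleBuffer is the encoded frame:

      // Read the LTR acknowledgement token, if present, from the first
      // sample attachment so it can travel to the receiver with the frame.
      CFArrayRef attachments =
          CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, false);
      if (attachments && CFArrayGetCount(attachments) > 0) {
          CFDictionaryRef attachment =
              (CFDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
          CFNumberRef ltrToken = (CFNumberRef)CFDictionaryGetValue(attachment,
              kVTSampleAttachmentKey_RequireLTRAcknowledgementToken);
          if (ltrToken) {
              // Transmit this token with the frame and wait for the receiver's
              // acknowledgement before treating the LTR as usable.
          }
      }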

    The sender client is responsible for reporting the acknowledged LTR frames to the encoder through the AcknowledgedLTRTokens frame property. Since more than one acknowledgement can arrive at a time, we need to use an array to store these frame tokens.

    You can request a refresh frame at any time through the ForceLTRRefresh frame property. Once the encoder receives this request, an LTR-P will be encoded. If no acknowledged LTR is available, the encoder will generate a key frame instead.
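
    Putting these pieces together, a sketch might look like the following, where ackedTokens stands in for a CFArray of tokens your app has collected from the receiver; check the VideoToolbox headers for the exact value types each key expects:

      // Enable long-term reference on the low-latency session.
      VTSessionSetProperty(compressionSession,
                           kVTCompressionPropertyKey_EnableLTR,
                           kCFBooleanTrue);

      // On a later encode call, report acknowledged tokens and request a
      // refresh frame via per-frame properties.
      const void *keys[]   = { kVTEncodeFrameOptionKey_AcknowledgedLTRTokens,
                               kVTEncodeFrameOptionKey_ForceLTRRefresh };
      const void *values[] = { ackedTokens,      // CFArrayRef of acknowledged tokens
                               kCFBooleanTrue }; // request an LTR-P refresh
      CFDictionaryRef frameProperties =
          CFDictionaryCreate(kCFAllocatorDefault, keys, values, 2,
                             &kCFTypeDictionaryKeyCallBacks,
                             &kCFTypeDictionaryValueCallBacks);
      VTCompressionSessionEncodeFrame(compressionSession, pixelBuffer, pts,
                                      kCMTimeInvalid, frameProperties,
                                      NULL, NULL);
      CFRelease(frameProperties);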

    All right. Now we’ve covered the new features in low-latency mode. We can talk about using these features together.

    For example, we can use temporal scalability and the max frame quantization parameter for a group screen-sharing application. Temporal scalability can efficiently generate output video for each recipient, and we can lower the max frame QP for sharper UI and text in the screen content.

    If the communication goes through a poor network and a refresh frame is needed to recover from the error, long-term reference can be used. And if the receiver can only decode constrained profiles, we can encode with constrained baseline profile or constrained high profile.

    OK. We’ve covered a few topics here. We’ve introduced a low-latency encoding mode in Video Toolbox.

    We’ve talked about how to use VTCompressionSession APIs to encode videos in low-latency mode.

    Besides the latency benefit, we also developed a bunch of new features to address the requirements of real-time video applications. With all these improvements, I hope the low-latency mode can make your video app more amazing. Thanks for watching, and have a great WWDC 2021. [upbeat music]

    • 5:03 - VTCompressionSession creation

      CFMutableDictionaryRef encoderSpecification =
                  CFDictionaryCreateMutable(kCFAllocatorDefault, 0, NULL, NULL);
      
      CFDictionarySetValue(encoderSpecification,
                           kVTVideoEncoderSpecification_EnableLowLatencyRateControl,
                           kCFBooleanTrue);
      
      VTCompressionSessionRef compressionSession;
      
      OSStatus err = VTCompressionSessionCreate(kCFAllocatorDefault, 
                                                width, 
                                                height,
                                                kCMVideoCodecType_H264, 
                                                encoderSpecification,
                                                NULL, 
                                                NULL, 
                                                outputHandler, 
                                                NULL,
                                                &compressionSession);
    • 7:35 - New profiles

      // Request CBP
      
      VTSessionSetProperty(compressionSession, 
                           kVTCompressionPropertyKey_ProfileLevel, 
                           kVTProfileLevel_H264_ConstrainedBaseline_AutoLevel);
      
      // Request CHP
      
      VTSessionSetProperty(compressionSession, 
                           kVTCompressionPropertyKey_ProfileLevel, 
                           kVTProfileLevel_H264_ConstrainedHigh_AutoLevel);
