Could an English expert please help translate a passage from an English paper? Professional quality needed!

Please don't translate it literally — I need a professional rendering with the technical terms translated accurately. Thanks, everyone!
For the above channel, Figure 3 shows the dependence on the SNR. We display how often each method outperforms the others, in the sense that it is closer to the exact smoothing probabilities (a value of 0.8 for a method means that this method was superior in 80% of the Monte Carlo runs). When the SNR is large, the best weights selection shows better performance, but the smaller the SNR, the better the random selection schemes become. For an SNR smaller than 6.5 dB, the Chi-Square optimal sampling is the best method. Figure 4 shows that the Kullback-Leibler optimal sampling is far closer to the Chi-Square optimal sampling than to the best weights selection. This result is not surprising, since the Chi-Square method has the largest variability, followed by the Kullback-Leibler optimal sampling. Indeed, for a large SNR, where the results are less corrupted by the noise, the additional variability is counterproductive and the best weights selection is clearly superior. But for cases with small SNR, a method with larger variability is better able to cope with the noise, so the random methods are clearly superior.
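To make the "superiority ratio" concrete, here is a minimal sketch of how such a figure could be computed. The distance matrix is placeholder random data standing in for each method's per-run distance to the exact smoothing probabilities; the three-method setup and the variable names are assumptions, not the paper's code.

```python
import numpy as np

# Placeholder per-run distances to the exact smoothing probabilities:
# rows are Monte Carlo runs, columns are the three selection methods.
rng = np.random.default_rng(0)
distances = rng.random((1000, 3))

# A method "wins" a run when its distance is strictly the smallest.
winners = np.argmin(distances, axis=1)

# Fraction of runs each method won; a value of 0.8 would mean that
# method was closest to the exact probabilities in 80% of the runs.
ratios = np.bincount(winners, minlength=3) / distances.shape[0]
print(ratios)
```

With real data, the column whose ratio dominates at a given SNR is the method reported as superior at that SNR.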
The next simulations were carried out with channels drawn at random from a uniform distribution in each Monte Carlo run. The results for N = 100 are similar, as can be seen in Figure 5, but the differences between the selection methods are less pronounced. In fact, for a smaller particle size N = 50, the best weights selection becomes superior to the other methods. This is because, for larger particle sizes, it is possible to keep the states with the largest weights while also keeping a number of randomly selected states.
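The last point — keeping the largest-weight states and additionally a set of randomly chosen states — can be sketched as follows. This is an illustrative hybrid selection under assumed names (`mixed_selection`, the 30/20 split); the paper does not specify this exact scheme.

```python
import numpy as np

def mixed_selection(weights, n_keep, n_random, rng):
    """Keep the n_keep largest-weight states deterministically and add
    n_random further states drawn in proportion to their weights."""
    weights = np.asarray(weights, dtype=float)
    best = np.argsort(weights)[-n_keep:]                  # largest weights
    rest = np.setdiff1d(np.arange(weights.size), best)    # remaining states
    p = weights[rest] / weights[rest].sum()               # renormalise
    random_part = rng.choice(rest, size=n_random, replace=False, p=p)
    return np.concatenate([best, random_part])

rng = np.random.default_rng(1)
w = rng.random(100)                                       # N = 100 particles
selected = mixed_selection(w, n_keep=30, n_random=20, rng=rng)
print(selected.size)   # 50 retained states out of 100
```

For small N, the random part is squeezed out, which is consistent with the observation that pure best weights selection wins at N = 50.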
We simulate an intermediate step of the EM algorithm by randomly drawing the model parameters (channel coefficients) from a uniform distribution, while the current estimate of the parameters is slightly deteriorated with respect to these correct parameters. The deterioration is also chosen uniformly at random, with a norm less than half the norm of the correct channel. Figure 8 shows that in this case as well, the random methods are superior to the best weights selection. The differences between the two random methods are small, but the Chi-Square optimal sampling is a slightly more favorable method.
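The perturbation described here — a uniformly random deterioration whose norm stays below half the norm of the correct channel — could be generated as in the following sketch. The channel length of 5 and the direction-plus-scale construction are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical "correct" channel coefficients, drawn uniformly.
true_channel = rng.uniform(-1.0, 1.0, size=5)

# Deterioration: a random direction scaled so its norm is strictly
# less than half the norm of the correct channel.
direction = rng.uniform(-1.0, 1.0, size=5)
direction /= np.linalg.norm(direction)
scale = rng.uniform(0.0, 0.5) * np.linalg.norm(true_channel)
current_estimate = true_channel + scale * direction

# The perturbed estimate respects the stated bound.
print(np.linalg.norm(current_estimate - true_channel)
      < 0.5 * np.linalg.norm(true_channel))
```

The EM step would then refine `current_estimate` back toward `true_channel` using the smoothed state probabilities.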
Please, no literal machine translations like Google or Kingsoft — the sentences come out garbled and some technical terms don't translate at all. I'd appreciate help from someone with expertise. Thank you!
Answer from shenjiansong54 (2010-01-07):
For the above channel, Figure 3 shows the dependence on the SNR. We display how often each method outperforms the others, in the sense of being closer to the exact smoothing probabilities (a value of 0.8 means the method was superior in 80% of the Monte Carlo runs). When the SNR is large, the best weights selection performs better, but the smaller the SNR, the better the random selection schemes become. For an SNR below 6.5 dB, the Chi-Square optimal sampling is the best method. Figure 4 shows that the Kullback-Leibler optimal sampling is far closer to the Chi-Square optimal sampling than to the best weights selection. This is not surprising, since the Chi-Square method has the largest variability, followed by the Kullback-Leibler optimal sampling. Indeed, for a large SNR, where the results are less corrupted by noise, the additional variability is counterproductive and the best weights selection is clearly superior. But for cases with small SNR, a method with larger variability copes better with the noise, so the random methods are clearly superior.
The next simulations were carried out with channels drawn at random from a uniform distribution in each Monte Carlo run. The results for N = 100 are similar, as shown in Figure 5, but the differences between the selection methods are less pronounced. In fact, for a smaller particle size N = 50, the best weights selection becomes superior to the other methods. This is because, for larger particle sizes, it is possible to keep the states with the largest weights while also keeping a number of randomly selected states.
We simulate an intermediate step of the EM algorithm by drawing the model parameters (channel coefficients) at random from a uniform distribution, while the current parameter estimate is slightly deteriorated relative to these correct parameters. The deterioration is also chosen uniformly at random, with a norm less than half the norm of the correct channel. Figure 8 shows that in this case too, the random methods are superior to the best weights selection. The differences between the two random methods are small, but the Chi-Square optimal sampling is slightly more favorable.