缓存命中率 (cache hit rate)

  • [Networking] cache hit rate; hit ratio; Buffer Cache Hit Ratio
  1. 对模型的分析表明,IPWAN的性能是限制IP存储广域网最大性能的关键因素,提高缓存命中率、链路带宽和降低传播延迟将提高IP存储广域网的最大性能。

    The model analysis shows that IP WAN performance is the key factor limiting the maximum performance of the IP storage WAN (IP-SWAN); raising the cache hit rate, increasing link bandwidth, and reducing propagation delay all improve the maximum performance of the IP-SWAN.

  2. 从模拟实验结果来看,PGMC算法在缓存命中率和查询响应时间方面相对于以往的算法都有显著的提高和改进,具有更大的优越性。

    The simulation results show that the PGMC algorithm significantly improves the cache hit rate and query response time compared with previous algorithms, giving it a clear advantage.

  3. 研究表明Web缓存命中率可以达到30%-50%。

    Research shows that the hit ratio of Web caching can reach 30%-50%.

  4. 缓存命中率表示执行get的次数与错过get的次数的百分比。

    The cache hit ratio tells you the percentage of get operations performed versus the number of gets that miss (a minimal hit-ratio calculation is sketched after these examples).

  5. 如果Opened_tables随着重新运行SHOW STATUS命令快速增加,就说明缓存命中率不够。

    If Opened_tables increases quickly as you rerun the SHOW STATUS command, you aren't getting enough hits out of your cache (a monitoring sketch follows these examples).

  6. 仿真实验结果表明,顺序增加算法能获得比LRU和Graph算法更高的缓存命中率和更低的访问延迟。

    The simulation results show that the sequential-add algorithm achieves higher hit ratios and lower access latency than the LRU and Graph algorithms.

  7. 在较低的级别,我们关注高负载下的信号量、缓存命中率、I/O队列、CPU利用率、内存利用率和网络利用率。

    At the lower level, we focus on semaphores, cache hit rates, I/O queues, CPU utilization, memory utilization, and network utilization under high load.

  8. 视频压缩算法在向DSP(Digital signal processing)平台上移植时,大多存在程序结构设计不合理、数据结构冗余等问题,因而会导致缓存命中率下降、DSP的利用率降低。

    When video compression algorithms are ported to DSP (digital signal processing) platforms, they often suffer from poorly designed program structure and redundant data structures, which lower the cache hit ratio and reduce DSP utilization.

  9. 实验表明,在同样的测试环境下,Prefetch-LARD算法比LARD算法的缓存命中率提高26.9%,系统的吞吐量相应提高18.8%。

    Experiments show that, in the same test environment, the Prefetch-LARD algorithm improves the cache hit ratio by 26.9% and the system throughput by 18.8% compared with the LARD algorithm.

  10. 您获得的缓存命中率和未命中率是多少

    How many hits and misses are you getting?

  11. 在此处,我们对读写缓存命中率感兴趣。

    Here, we are interested in the read and write cache hit rates.

  12. 由于我们一遍又一遍地顺序扫描一个巨大的表,这样我们就无法关注读缓存命中率。

    We cannot pay attention to the read cache hit rate, since we are sequentially scanning a huge table over and over.

  13. 识别内存问题,包括较低的缓冲池命中率、较低的目录缓存命中率和较低的包缓存命中率。

    Identify memory problems, including low buffer pool hit ratios, low catalog cache hit ratios, and low package cache hit ratios.

  14. 同时,在集群分发过程中应用该算法,可提高请求的调度效率和节点的缓存命中率。

    Meanwhile, applying the algorithm during cluster dispatching improves the scheduling efficiency of requests and the cache hit ratio of the nodes.

  15. 另外,对于那些受益于文件系统预读功能或者较高缓冲区缓存命中率的应用程序,可能会出现性能降低。

    Further, applications that might benefit from file system read-ahead or high buffer cache hit rates might actually see performance degradation.

  16. 缓存命中率不发生变化时,缓存空间越大I/O响应时间越长;

    When the cache hit rate remains unchanged, the larger the cache size, the longer the I/O response time.

  17. 通过数学建模和仿真分析对比了两种模式在缓存命中率、缓存缺失数目、映射条目以及通信中断概率等方面的性能。

    Through mathematical modeling and simulation analysis, this dissertation compares the performance of the two modes in terms of cache hit rate, number of cache misses, mapping entries, and communication interruption probability.

  18. 一般情况下,不鼓励在SQL中使用字面值,而是主张使用变量或参数,因为字面值会使每个语句具有独特性,这会降低缓存的命中率。

    Using literals instead of variables or parameters is generally discouraged in SQL because it makes each statement unique, which drives down cache hit ratios (a parameterized-query sketch follows these examples).

  19. 这一缓存的命中率是99%。

    This cache has a 99% hit rate.

  20. 在访问效率方面,缓存的命中率偏低。

    In terms of data access efficiency, the cache hit rate is on the low side.

  21. 此系统综合了当前第4层和第7层调度技术的优点,避免了前端瓶颈问题,提高了整个集群的转发效率和缓存的命中率。

    The system combines the strengths of current layer-4 and layer-7 scheduling techniques, avoids the front-end bottleneck problem, and improves the forwarding efficiency and cache hit rate of the whole cluster.

  22. 通过对多核处理器的分析,发现影响其性能的关键因素有两个:一个是多核处理器二级缓存的命中率,另一个是多核处理器线路的利用率。

    Through analysis of multi-core processors, we found two key factors that affect their performance: one is the hit rate of the multi-core processor's L2 cache, and the other is the utilization of the processor's interconnect lines.

  23. 对象缓存可以达到最高的缓存命中率;只要后端没有更改数据,存储在缓存中的此数据就有效。

    Object caching can achieve the highest rate of cache hits; data stored in the cache remains valid as long as the backend does not change it.

  24. 如何建立一个高效的缓存替代策略来提高缓存的命中率并最大程度的降低不命中损失是一个重要的研究点。

    An important research question is how to build an efficient cache replacement policy that raises the cache hit ratio while minimizing the cost of cache misses.

  25. 本文最后的模拟实验结果显示基于地理信息匹配的缓存管理算法能大幅提升缓存命中率,测试结果证明该算法具有较高的现实应用价值。

    Finally, the simulation results show that the cache management algorithm based on geographic information matching significantly increases the cache hit rate, and the test results demonstrate that the algorithm has considerable value in real-world applications.

  26. 传输、访问和缓存(内核块缓冲区缓存)命中率的缓冲区活动

    Buffer activity for transfers, accesses, and cache (kernel block buffer cache) hit ratios

  27. 位于因特网骨干网和同一接入网之间的流媒体缓存代理服务器相互协作,可以提高缓存命中率,保持负载平衡。

    Cooperation among streaming media cache proxy servers located between the Internet backbone and the same access network can increase the cache hit rate and maintain load balance.

  28. 通过实验对比,PLAC比其他位置相关缓存替换策略更为有效地提高了缓存命中率,缩短了查询平均响应时间。

    Comparative experiments show that PLAC improves the cache hit rate and shortens the average query response time more effectively than other location-dependent cache replacement policies.

  29. 确定缓存效率的另一种方法是查看缓存的命中率(hit ratio)。

    Another method of determining your caching effectiveness is to take a look at your cache hit ratio.

  30. 面向推理的上下文缓存置换算法CORA的目标是使上下文缓存达到较高命中率,有效节省普适计算中传输上下文的开销。

    A reasoning-oriented context cache replacement algorithm (CORA) is presented, which aims to achieve a high hit rate for the context cache and effectively reduce the overhead of transmitting context in pervasive computing.
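
A minimal Python sketch of the calculation described in example 4, i.e. hits expressed as a fraction of all get operations; the counter names hits and misses are hypothetical stand-ins for whatever statistics a given cache exposes.

    def cache_hit_ratio(hits: int, misses: int) -> float:
        """Fraction of get operations served from the cache."""
        total_gets = hits + misses
        if total_gets == 0:
            return 0.0  # no traffic yet, so no meaningful ratio
        return hits / total_gets

    # Example: 3,000 gets hit the cache, 1,000 missed and went to the backing store.
    print(f"hit ratio: {cache_hit_ratio(3_000, 1_000):.1%}")  # -> hit ratio: 75.0%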
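
Example 5 refers to MySQL's SHOW STATUS output. The sketch below is one way to watch the Opened_tables counter over time; it assumes the mysql-connector-python driver and uses placeholder connection details, so treat it as an illustration rather than a ready-made monitor.

    import time
    import mysql.connector  # assumes the mysql-connector-python package is installed

    # Placeholder connection details for illustration only.
    conn = mysql.connector.connect(host="localhost", user="monitor", password="secret")
    cur = conn.cursor()

    def opened_tables(cursor) -> int:
        # SHOW GLOBAL STATUS returns (Variable_name, Value) rows; Value is a string.
        cursor.execute("SHOW GLOBAL STATUS LIKE 'Opened_tables'")
        return int(cursor.fetchone()[1])

    before = opened_tables(cur)
    time.sleep(60)  # sample again after a minute of normal load
    after = opened_tables(cur)

    # A rapidly growing Opened_tables counter suggests the table cache
    # (table_open_cache) is too small to hold the working set of tables.
    print(f"Opened_tables grew by {after - before} in 60 seconds")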
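
Example 18 concerns why literal values hurt statement-cache hit ratios. The snippet below contrasts the two styles; sqlite3 is used only to keep the code self-contained and runnable, while the caching benefit it illustrates applies to any database that keeps a statement or plan cache keyed by the SQL text.

    import sqlite3

    # A throwaway in-memory database so the example runs anywhere.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

    # Literal values make every statement text unique, so a cache keyed by the
    # SQL string sees three different statements here and its hit ratio drops.
    for customer_id in (17, 42, 99):
        conn.execute(f"SELECT id FROM orders WHERE customer_id = {customer_id}")

    # A parameterized query keeps the statement text constant; only the bound
    # value changes, so the same cached statement (or plan) can be reused.
    for customer_id in (17, 42, 99):
        conn.execute("SELECT id FROM orders WHERE customer_id = ?", (customer_id,))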