Institutional Repository of the Institute of Software, Chinese Academy of Sciences (ISCAS OpenIR)
Title: Transient Reward Approximation for Continuous-Time Markov Chains
Author: Hahn, EM ; Hermanns, H ; Wimmer, R ; Becker, B
Keyword: Continuous-time Markov chains ; continuous-time Markov decision processes ; abstraction ; symbolic methods ; ordered binary decision diagrams
Source: IEEE TRANSACTIONS ON RELIABILITY
Issued Date: 2015
Volume: 64, Issue: 4, Pages: 1254-1275
Indexed Type: SCI
Department: Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing, Peoples R China; Univ Saarland, D-66123 Saarbrucken, Germany; Univ Oxford, Oxford OX1 2JD, England; Univ Saarland, Dept Comp Sci, D-66123 Saarbrucken, Germany; Univ Freiburg, Breisgau, Germany; Univ Freiburg, Fac Engn, Chair Comp Architecture, Breisgau, Germany.
Abstract: We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., of computer network performability analysis, of power grids, of computer virus vulnerability, and in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well if facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies.
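As a purely illustrative aside (not the authors' algorithm, which computes upper and lower reward bounds on abstracted CTMDPs with symbolically represented state spaces), the Python sketch below shows the underlying transient-reward idea on a toy explicit-state CTMC: the expected final reward at time t is approximated by standard uniformization. The 3-state generator matrix, reward vector, and function name are hypothetical examples chosen for this sketch.

import numpy as np

def expected_final_reward(Q, pi0, reward, t, eps=1e-12):
    # Approximate E[reward of the state occupied at time t] for a CTMC with
    # generator Q (rows sum to 0), initial distribution pi0, and per-state
    # reward vector, via uniformization with Poisson tail truncation eps.
    n = Q.shape[0]
    lam = max(-Q.diagonal().min(), 1e-30)   # uniformization rate >= largest exit rate
    P = np.eye(n) + Q / lam                 # uniformized DTMC, row-stochastic by construction
    weight = np.exp(-lam * t)               # Poisson(lam*t) probability of 0 jumps
    acc_weight = weight
    vec = pi0.copy()
    result = weight * vec
    k = 0
    # Accumulate Poisson-weighted jump distributions until the tail mass is below eps.
    while acc_weight < 1.0 - eps:
        k += 1
        vec = vec @ P
        weight *= lam * t / k
        acc_weight += weight
        result += weight * vec
    return float(result @ reward)

# Hypothetical 3-state reliability model: an operational component degrades (rate 0.5),
# a degraded component fails (rate 0.2), and a failed component is repaired (rate 1.0);
# reward 1 while the component is up, 0 once it has failed.
Q = np.array([[-0.5, 0.5, 0.0],
              [0.0, -0.2, 0.2],
              [1.0, 0.0, -1.0]])
pi0 = np.array([1.0, 0.0, 0.0])
reward = np.array([1.0, 1.0, 0.0])
print(expected_final_reward(Q, pi0, reward, t=10.0))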
Language: English
WOS ID: WOS:000366323500012
Content Type: Journal article
URI: http://ir.iscas.ac.cn/handle/311060/17427
Appears in Collections: Institute of Software Library - Journal Articles

Files in This Item:
File Name: 07163373.pdf (4166 KB)
Access: Restricted; contact the repository to obtain the full text.

Recommended Citation:
Hahn, EM, Hermanns, H, Wimmer, R, et al. Transient Reward Approximation for Continuous-Time Markov Chains[J]. IEEE TRANSACTIONS ON RELIABILITY, 2015, 64(4): 1254-1275.

Items in IR are protected by copyright, with all rights reserved, unless otherwise indicated.

 

 

Copyright © 2007-2019 Institute of Software, Chinese Academy of Sciences
Powered by CSpace