1000+ KPI
Hi. If you have ever faced the task of building a system of key performance indicators (KPIs) for employees or projects, you will probably agree that it is not an easy one. Head-on solutions do not work here: many seemingly obvious indicators turn out on closer inspection to be uninformative, give a distorted picture of the situation, or can easily be gamed by employees.
To save you precious time, keep your nerve cells healthy, and offer some inspiration, I suggest browsing this list of popular KPIs for IT. Perhaps some of them will help with your task.
IT
General
# extra months spent for the implementation
# fixed bugs
# of alerts on exceeding system capacity thresholds
# of annual IT service continuity plan testing failures
# of business disruptions caused by (operational) problems
# of changes closed, relative to # of changes opened in a given time period
# of complaints received within the measurement period
# of failures of IT services during so-called critical times
# of incidents closed, relative to # of incidents opened in a given time period
# of incidents still opened
# of open incidents older than 15 days relative to all open incidents
# of open problems older than 28 days relative to all open problems
# of open service requests older than 28 days relative to all open service requests
# of overdue changes relative to # of open changes
# of overdue problems relative to # of open problems
# of requests closed, relative to # of requests opened in a given time period
# of Service Level Agreement (SLA) breaches due to poor performance
# of unmodified/neglected incidents
% accuracy of forecast against actuals of expenditure as defined in capacity plan
% accuracy of forecast against actuals of expenditure as defined in continuity plan
% applications with adequate user documentation and training
% bugs found in-house
% financial management processes supported electronically
% hosts missing high priority patches
% of (critical) infrastructure components with automated availability monitoring
% of actual uptime (in hours) of equipment relative to planned uptime (in hours)
% of application / software development work outsourced
% of backlogged/neglected change requests
% of business process support of applications
% of closed service requests that have been escalated to management, relative to all closed service requests
% of Configuration Items (CIs) included in capacity reviews
% of Configuration Items (CIs) with under-capacity, relative to all CIs used to deliver services to end-customers
% of delivered changes implemented within budget/costs
% of efficient and effective technical business process adaptability of applications
% of incidents which change priority during the lifecycle
% of incidents solved within deadline
% of incidents that can be classified as a repeat incident, relative to all reported incidents
% of IT services that are not covered in the continuity plan
% of open service requests worked on
% of overdue incidents
% of overdue service requests
% of problems for which a root cause analysis was undertaken
% of problems resolved within the required time period
% of problems with a root cause identified for the failure
% of problems with available workaround
% of reopened incidents
% of reopened service requests
% of response-time SLAs not met
% of reviewed SLAs
% of service requests due to poor performance of services provided to end-customers
% of service requests posted via web (self-help)
% of service requests resolved within an agreed-upon/ acceptable period of time
% of SLAs with an assigned account manager
% of SLAs without service level breaches
% of time (in labor hours) used to coordinate changes relative to all time used to implement changes
% of unauthorized implemented changes
% of unplanned purchases due to poor performance
% of urgent changes
% of workarounds to service requests applied
ASL applications cycle management
% of implemented changes without impact analysis
Average delay in SLAs review
Average problem closure duration
Average service request closure duration
Average spent duration of changes closed relative to the average allowed duration of those changes closed
Average time (hours) between the occurrence of an incident and its resolution
Average time (in days) between updates of Capacity Plan
Average time (in days) between updates of Continuity Plan
Average time spent (in FTE) on producing and keeping Capacity Plans up to date
Average time spent (in FTE) on producing and keeping Continuity Plans up to date
Business Value (BV) of application(s)
Change closure duration rate
Customer satisfaction (index)
First line service request closure rate
Gap between actual network usage and maximum capacity of the network
Problem queue rate
Ratio of # of incidents versus # of changes
Service request closure duration rate
Technical Value (TV) of application(s)
Time between reviews of IT continuity plan
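
Most of the incident and request metrics above boil down to simple aggregations over a ticket log. As a minimal sketch (in Python, with hypothetical field names rather than a real ITSM schema), here is how three of them could be computed:

```python
from datetime import datetime

# Hypothetical ticket records; the field names are assumptions, not a real schema.
incidents = [
    {"opened": datetime(2023, 1, 1, 9), "closed": datetime(2023, 1, 1, 17),
     "deadline": datetime(2023, 1, 2, 9), "reopened": False},
    {"opened": datetime(2023, 1, 2, 9), "closed": datetime(2023, 1, 5, 9),
     "deadline": datetime(2023, 1, 3, 9), "reopened": True},
]

closed = [i for i in incidents if i["closed"] is not None]

# Average time (hours) between the occurrence of an incident and its resolution
avg_resolution_h = sum(
    (i["closed"] - i["opened"]).total_seconds() / 3600 for i in closed
) / len(closed)

# % of incidents solved within deadline
pct_within_deadline = 100 * sum(i["closed"] <= i["deadline"] for i in closed) / len(closed)

# % of reopened incidents
pct_reopened = 100 * sum(i["reopened"] for i in closed) / len(closed)

print(f"avg resolution: {avg_resolution_h:.1f} h, "
      f"within deadline: {pct_within_deadline:.0f}%, "
      f"reopened: {pct_reopened:.0f}%")
```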
Project lifecycle
Planning and organization
# of conflicting responsibilities in the view of segregation of duties
% IT staff competent for their roles
% of budget deviation value compared to the total budget
% of IT budget spent on risk management (assessment and mitigation) activities
% of IT functions connected to the business
% of IT initiatives/projects championed by business owners
% of IT objectives that support business objectives
% of IT services whose costs are recorded
% of processes receiving Quality Assurance (QA) review
% of projects meeting stakeholder expectations
% of projects on budget
% of projects on time
% of projects receiving Quality Assurance (QA) review
% of projects with a post-project review
% of projects with the benefit (Return on Investment) defined up front
% of redundant and/or duplicate data elements as exist in the information architecture
% of repeat incidents
% of roles with documented position and authority descriptions
% of sick days (illness rate)
% of software applications that are not complying with the defined information architecture
% of software applications that do not comply to the defined technology standards
% of stakeholders satisfied with IT quality
% of stakeholders that understand IT policy
% of variation of the annual IT plan
Actual ratio vs. planned ratio of IT contractors to IT personnel
Average # of components under management per FTE
Delay in updates of IT plans after strategic updates
Frequency (in days) of enterprise IT control framework review/update
Frequency (in days) of review of the IT risk management process
Frequency (in days) of reviews of the existing infrastructure against the defined technology standards
Frequency (in days) of strategy and steering committee meetings
Frequency (in days) of updates to the information architecture
Frequency (in days) of updates to the technology standards
Overtime rate: employee overtime relative to the planned working times
Ratio of IT contractors to IT personnel
Implementation
# of application production problems (per application) causing visible downtime
# of bugs or software defects of applications (versions) that are in production
# of critical business processes supported by obsolete infrastructure
# of different technology platforms
# of infrastructure components that are no longer supportable
% of applications with adequate user and operational support training
% of business owners satisfied with application training and support materials
% of delivered projects where stated benefits were not achieved due to incorrect feasibility assumptions
% of development effort spent maintaining existing applications
% of feasibility studies signed off on by the business process owner
% of implemented changes not approved (by management / CAB)
% of infrastructure components acquired outside the acquisition process
% of key stakeholders satisfied with their suppliers
% of procurement requests satisfied by preferred suppliers
% of procurement requests satisfied by the preferred supplier list
% of procurements in compliance with standing procurement policies and procedures
% of projects on time and on budget
% of projects with a testing plan
% of Request for Proposals (RFP) that needed to be improved based on supplier responses
% of stakeholders satisfied with the accuracy of the feasibility study
% of systems that do not comply to the defined technology standards
% of users satisfied with the functionality delivered
Average # of responses received to Request for Proposals (RFP)
Average rework per change after implementation of changes
Average time to configure infrastructure components
Cost to produce/maintain user documentation, operational procedures and training materials
Satisfaction scores for training and documentation related to user and operational procedures
Software average time to procure
Time lag between changes and updates of training, procedures and documentation materials
Total rework (in FTE) after implementation of changes
Monitoring and control
# of (critical) non-compliance issues identified
# of (major) internal control breaches, within measurement period
# of improvement actions driven by monitoring activities
# of IT policy violations
# of non-compliance issues reported to the board or causing public comment or embarrassment
# of recurrent IT issues on board agendas
# of weaknesses identified by external qualification and certification reports
% maturity of board reporting on IT to stakeholders
% maturity of reporting from IT to the board
% of critical processes monitored
% of metrics that can be benchmarked to (industry) standards and set targets
Age (days) of agreed-upon recommendations
Amount of delay to update measurements to reflect actual performance
Amount of effort required to gather measurement data
Average time lag between identification of external compliance issues and resolution
Average time lag between publication of a new law or regulation and initiation of compliance review
Cost of non-compliance, including settlements and fines
Frequency (in days) of board reporting on IT to stakeholders
Frequency (in days) of compliance reviews
Frequency (in days) of reporting from IT to the board
Frequency of independent reviews of IT compliance
Frequency of IT governance as an agenda item in the IT steering/strategy meetings
Stakeholder satisfaction with the measuring process
Time between internal control deficiency occurrence and reporting
Support
# of business compliance issues caused by improper configuration of assets
# of deviations identified between the configuration repository and actual asset configurations
# of formal disputes with suppliers
# of incidents due to physical security breaches or failures
# of incidents of non-compliance with laws due to storage management issues
# of incidents of unauthorized access to computer facilities
# of incidents outside the hours when security staff are present
# of incidents where sensitive data were retrieved after media were disposed
# of SLAs without service level breaches relative to # of SLAs under management
# of training hours divided by # of employees (in FTE)
# of violations in segregation of duties
# critical time outage
# devices per FTE
# incidents per PC
# incidents processed per service desk workstation
# IT service desk availability
# mean time to repair (MTTR)
# of complaints
# of training calls handled by the service desk
# of un-responded emails
% of (major) suppliers subject to monitoring
% of applications that are not capable of meeting password policy
% of availability Service Level Agreements (SLAs) met
% of budget deviation relative to total budget
% of critical business processes not covered by a defined service availability plan
% of delivered services that are not included in the service catalogue
% of disputed IT costs by the business
% of IT service bills accepted/paid by business management
% of licenses purchased and not accounted for in the configuration repository
% of outage due to incidents (unplanned unavailability)
% of personnel trained in safety, security and facilities measures
% of scheduled work not completed on time
% of service levels (in Service Level Agreements) reported in an automated way
% of service levels (in Service Level Agreements) that are actually measured
% of successful data restorations
% of systems where security requirements are not met
% of telephone calls abandoned by the caller while waiting to be answered
% of transactions executed within response time threshold
% of user complaints on contracted services as a % of all user complaints
% of users who do not comply with password standards
% incidents resolved remotely, without the need of a visit
% incidents solved by first point of contact
% incidents solved within SLA time
% incidents which changed priority during the life-cycle
% IT incidents fixed before users notice
% IT incidents solved within agreed response time
% neglected incidents
% of (re-)assignments of service requests
% of calls transferred within measurement period
% of customer issues that were solved by the first phone call
% of first-line resolution of service requests
% of incorrectly assigned incidents
% of incorrectly assigned service requests
% of terminal response time
% service requests posted via web (self-help)
Actual budget (costs) relative to the established budget
Amount of downtime arising from physical environment incidents
Average # of training days per operations personnel
Average time (in hours) for data restoration
Average time period (lag) between identifying a discrepancy and rectifying it
Average # of (re)-assignments of closed incidents within measurement period
Average # of calls / service request per handler
Average # of calls / service requests per employee of call center / service desk within measurement period
Average after-call work time (work done after the call has been concluded)
Average amount of time (e.g. in days) between the registration of changes and their closure
Average amount of time between the registration of incidents and their closure
Average days for lease refresh/upgrade fulfillment
Average days for software request fulfillment
Average incident response time
Average overdue time of overdue service requests
Average problem closure duration
Average TCP round-trip time
Downtime caused by deviating from operations procedures
Downtime caused by inadequate procedures
Time before help calls are answered
Total service delivery penalties paid
Frequency (in days) of physical risk assessment and reviews
Frequency (in days) of review of IT cost allocation model
Frequency (in days) of testing of backup media
Frequency (in days) of updates to operational procedures
Frequency of review of IT continuity plan
Unit costs of IT service(s) within measurement period
User satisfaction with availability of data
Business
Service
# e-mail backlog
# of alerts on exceeding system capacity thresholds
# of transactions executed within response time threshold
% delivered services not in the service catalogue
% fully patched hosts
% of «dead» servers
% of (assigned) disk space quota used
% of disk space used
% of dropped telephone calls
% of failed transactions
% of network bandwidth used
% of network packet loss
% of transactions executed within response time threshold during peak-time
Adoption Rate
Application performance index
Average # of virtual images per administrator
Average % of CPU utilization
Average % of memory utilization
Average network throughput
Average response time of transactions
Average retransmissions of network packets
Average size of email boxes/storage
Corporate average data efficiency (CADE)
Datacenter power usage effectiveness
Maximum CPU usage
Maximum memory usage
Maximum response time of transactions
Mean opinion score (MOS)
Mean time to provision
Mean-time between failure (MTBF)
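
Two of the data-center items above have well-known formulas: power usage effectiveness (PUE) is total facility power divided by the power delivered to IT equipment, and CADE is commonly defined as the product of facility efficiency and IT asset efficiency. A short illustrative sketch (the numbers are made up):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total power the data center draws
    per watt delivered to IT equipment (the ideal value is 1.0)."""
    return total_facility_kw / it_equipment_kw

def cade(facility_efficiency: float, it_asset_efficiency: float) -> float:
    """Corporate Average Data Efficiency: product of facility efficiency
    and IT asset efficiency, each expressed as a fraction of 1."""
    return facility_efficiency * it_asset_efficiency

print(pue(1500.0, 1000.0))   # 1.5 — a fairly typical PUE
print(cade(0.47, 0.30))      # ~0.14, i.e. a CADE of about 14%
```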
Service availability
# of developed new systems without downtime issues
# of integrated IT systems
# of outage due to incidents (unplanned unavailability)
# of reviews of management information systems (MIS)
% downtime (hours)
% effective usage of IT systems
% improvement of capacity of current systems
% mainframe availability
% of outage (unavailability) due to implementation of planned changes, relative to the service hours
% of outage (unavailability) due to incidents in the IT environment, relative to the service hours
% of outage due to changes (planned unavailability)
% of system availability
% of unplanned outage/unavailability due to changes
% suitability of IT Systems
Customer database availability
Total outage from critical time failures in IT services
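
The availability percentages in this group share one underlying formula: the time the service was actually up divided by the agreed service time. A minimal sketch:

```python
def availability_pct(agreed_hours: float, downtime_hours: float) -> float:
    """% of system availability over the agreed service time."""
    return 100 * (agreed_hours - downtime_hours) / agreed_hours

# Example: 720 agreed service hours in a month, 4 hours of unplanned outage
print(f"{availability_pct(720, 4):.2f}%")   # 99.44%
```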
Costs
# of maintenance contracts
% cost adherence
% hardware asset value to total IT value
% IT security budget
Average age of hardware assets
Average cost to solve a problem
Average cost to solve an incident
Average costs of a release
Average costs of change implementation
Average penalty costs per SLA
Average costs of penalties paid on Service Level Agreements (SLAs)
Cost of CMDB reconciliation
Cost of consumable items such as ink, cartridges, cds etc
Cost of delivery
Cost of digital storage media
Cost of Infrastructure
Cost of leased equipment
Cost of maintenance per 1000 lines of code
Cost of producing and keeping Capacity Plans up to date
Cost of producing and keeping Continuity Plans up to date
Cost of purchase
Cost of security incidents
Cost of security incidents due to unauthorized access to systems
Cost of spares
Cost per device
Cost per PC
Cost per stored terabyte
Cost per terabyte transmitted
Costs associated with unplanned purchases to resolve poor performance
Costs of operating a call center / service desk, usually for a specific period such as a month or quarter
Costs savings from service reuse
Cost of cleanup of virus/spyware incidents
Cost of finding and hiring one staff
Cost of managing processes
Cost of patches
Cost of producing capacity plans
Cost of producing continuity plans
Cost of professional certifications necessary
Cost of service delivery
Cost of skilled labor for support
Cost of support to the end users of IT assets
Cost per trouble report (man-hours)
Domain registrations costs
Facilities costs such as a dedicated server room with fire and air control systems
Financing costs
Hardware asset value
IT spending per employee
Labor cost for technical and user support
Net Present Value (NPV) of investment
Network costs determined by network demand and the bandwidth usage of the asset
Total cost of change implementation
Total cost of ownership
Total cost of release
Total cost to solve all incidents
Total cost to solve all problems
Time for maintenance scheduled and unscheduled
Time of usage of assets for unrelated activities such as gaming or chatting
Training costs of both IT staff and end users
Unit costs of IT service(s)
Use of assets for non-business purposes
Voice network — cost per minute
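
One item above, Net Present Value (NPV) of investment, deserves a formula: each future cash flow is discounted back to today at a chosen rate. A small sketch with made-up numbers:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net Present Value: cash_flows[0] is the (negative) initial
    investment at t=0, later entries are yearly net returns."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# A hypothetical IT investment: 100k up front, 40k/year back for 4 years
print(round(npv(0.10, [-100_000, 40_000, 40_000, 40_000, 40_000])))  # ≈ 26795
```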
Efficiency
# frequency of IT reporting to the board
# of capabilities (services that can be rendered)
# of people working on a project versus the required #
# of services delivered on time
# Service level Agreements (SLA) breaches due to poor performance
# terabyte managed by one Full Time Equivalent (FTE)
# unique requirements
# watts per active port
% facility efficiency (FE)
% growth in business profits
% growth in market share
% growth in sales
% improved SLAs
% IT projects with a testing plan
% Service level Agreements (SLAs) reviewed
% SLAs without service level breaches
% stock price appreciation
% time coordinating changes
% IT budget of total revenues
% IT capital spending of total investment
% of current initiatives driven by IT
% of current initiatives driven by the business
% of growth of IT budget
% of IT contribution in ROTA
% of IT costs associated to IT investment
% of IT costs associated to IT maintenance
% of IT labor outsourced
% of IT time associated to IT investment
% of IT training costs relative to IT operational costs
% of spend on current IT capital projects that are considered driven by the business
Average IT-related costs per customer
IT to total employees ratio
Ratio of % growth of IT budget versus % growth of revenues
Ratio of fixed price projects cost versus T&M projects cost
Actual capacity (# of people available; helps avoid overcommitting to new projects)
Technology effectiveness index
Environment
% of energy used from renewable sources («green energy»)
% of recycled printer paper
% of servers located in data centers
Corporate average data efficiency (CADE) measures data center efficiency across the corporate footprint
Datacenter power usage effectiveness (PUE)
Infrastructure
# maximum memory usage
# of compliments received
# of incidents caused by changes vs. total # of incidents
# of incidents caused by inadequate capacity
# of open IT Infrastructure incidents older than 28 days relative to all open incidents
# of open IT Infrastructure problems older than 28 days relative to all open problems
# of open service requests older than 28 days
# of outstanding actions against last SLA review
# of printers divided by # of staff
# of problems closed
# of repeated incidents
# of untested releases
# of urgent releases
# power usage effectiveness
# propagation delay
% availability (excluding planned downtime)
% data center infrastructure efficiency
% disk space quota used
% incidents solved within SLA time
% of audited Configuration Items (CIs)
% of changes closed before deadline
% of closed service requests that were incorrectly assigned relative to all closed service requests
% of Configuration Items (CIs) mapped onto IT services in the CMDB
% of Configuration Items (CIs) monitored for performance
% of Configuration Items (CIs) under maintenance contract
% of Configuration Items (CIs) with under-capacity
% of customers given satisfaction surveys
% of delivered services not in the service catalogue
% of end user computers
% of end user printers
% of escalated service requests
% of fully documented SLAs
% of implemented changes without impact analysis
% of inaccurately registered Configuration Items (CIs) in CMDB
% of incidents not solved in-time due to inaccurate configuration data
% of incidents which change classification during the lifecycle
% of incidents which change priority during the lifecycle
% of internal hosts which are centrally managed & protected
% of IT staff that is ITIL trained
% of IT staff with (advanced) ITIL certification
% of money spent on maintaining the IT infrastructure versus the total IT spent
% of money spent on new IT developments (investments) relative to the total IT spent
% of open service requests that are not owned by a person or group
% of open service requests unmodified/neglected
% of overdue changes
% of overdue problems
% of project files containing cost-/benefit estimates
% of refused changes
% of routine changes (indicates the maturity level of the process)
% of security-related service calls
% of Service Level Agreements (SLAs) in renegotiation relative to all SLAs that are in production
% of Service Level Agreements (SLAs) requiring changes
% of service requests closed before deadline
% of services covered by SLA
% of SLA breaches caused by underpinning contracts
% of SLA reviews conducted on-time
% of software licenses used
% of successful software installations
% of successful software upgrades
% of time coordinating changes
% of unmodified/neglected incidents
% of unmodified/neglected problems
% of unregistered changes
% of vendor services delivered without agreed service targets
% on-time service level changes
% reduction of IPCSs (Incident, Problem, Change, Service Request)
Average # of (re)-assignments of closed incidents
Average # of (re)-assignments of closed service requests within measurement period
Average change closure duration
Average rework (in FTE) per change after implementation of changes
Average size of discounts in procurement of items
Average time between audits of Configuration Items (CIs) as residing in the CMDB
Average time between CMDB reconciliation
Average time between urgent releases of software
Average time spent on CMDB reconciliation
Average time to procure an item
Balance of problems solved
Change queue rate
Delay in production of financial reports
First-call resolution rate
Forecast accuracy of budget
Growth of the CMDB
Incident impact rate due to incomplete CMDB
Mean Time To Detect (MTTD)
Overall cost of IT delivery per customer
Ratio of # of incidents versus # of problems
Service call abandoned rate
Service request backlog
Service request queue rate
Support costs of all software based on their support contracts
The actual costs relative to the budgeted costs of an activity
Time lag between request for procurement and signing of contract or purchase
Total critical-time outage
Total rework after implementation of changes
Total service delivery penalties paid within a period
Data backup
# applications data transfer time
# data center infrastructure efficiency
# deviations between configuration repository and actual configurations
# time for configuration management database (CMDB) reconciliation
% backup operations that are successful
% corporate average data efficiency
% data redundancy
% of changes that required restoration of backup
% of changes that required restoration of backup during the implementation
% of physical backup / archive media that are fully encrypted
% of test backup restores that are successful
Age of backup
Average time between tests of backup
Average time to restore backup
Average time to restore off-site backup
Network
# link transmission time
# network latency
# of bytes received since the system started
# of bytes sent out to connections
# of commands sent
# of connection attempts made since the system started
# of connections currently waiting in the queue to be processed
# of connections that have failed to complete successfully
# of connections that successfully completed their transfer and confirmation
# of messages received by the system
# of the currently active connections that are open and sending information
# retransmission delay
# voice network minutes per FTE
% Internal servers centrally managed
% network bandwidth used
% network packet loss
% utilization of data network
Accuracy rate
Average connection time
Average network round trip latency
Average response speed
Connections per customer
Cost per byte
Total amount of time the system has been running in milliseconds
Time since the system started (days, UTC)
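
Several of the network metrics above (average connection time, round-trip latency) can be approximated by timing TCP connection setup. A rough sketch using only the Python standard library (host and port are placeholders):

```python
import socket
import time

def avg_connect_time_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Average TCP connection setup time — a rough proxy for the
    'average connection time' and round-trip latency metrics above."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            total += time.perf_counter() - start
    return 1000 * total / samples

print(f"{avg_connect_time_ms('example.com'):.1f} ms")
```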
Operations
# of business disruptions caused by problems
# of compliments
# of deviations between configuration repository and actual configurations
# of incidents in the first month
# of outstanding actions of last SLA review
# of overdue changes
# of overdue incidents
# of overdue problems
# of overdue service requests
# of problems in queue
# of problems with available workaround
# of reopened incidents
# of reopened service requests
# of repeat incidents
# of reviewed SLAs
# of service requests posted via web (self-help)
# of SLA breaches due to poor performance
# of SLAs with an assigned account manager
# of SLAs without service level breaches
# of software licenses used
# of time coordinating changes
# of unauthorized implemented changes
# of unplanned purchases due to poor performance
# of unregistered changes
# of untested releases
# of urgent changes
% growth of the CMDB
% incidents assigned to a level of support
% incidents closed unsatisfactorily
% incidents resolved using a change
% incidents resolved with workaround
% of audited Configuration Items (CIs)
% of availability SLAs met
% of backed-out changes
% of calls transferred
% of Configuration Items (CIs) included in capacity reviews
% of escalated service requests
% of implemented changes not approved by management
% of incident classified as ‘major’
% of incidents with incomplete impact classification
% of incidents bypassing the support desk
% of incidents caused by a workaround
% of incidents closed by service provider
% of incidents closed satisfactorily
% of incidents expected to close next period by scheduled workaround or change
% of incidents for which a first interview was completed
% of incidents for which entitlement is unconfirmed
% of incidents inbound versus outbound
% of incidents incorrectly classified
% of incidents incorrectly prioritized
% of incidents involving third-party agreement
% of incidents recorded ‘after the fact’
% of incidents rejected for reassignment
% of incidents resolved with non-approved workaround
% of incidents resulting from a service request
% of incidents resulting from previous incidents
% of incidents solved within deadline
% of incidents which change during the lifecycle
% of incidents with unmatched agreements
% of licenses purchased and not accounted for in configuration repository
% of obsolete user accounts
% of open service requests worked on
% of problems with a root cause analysis
% of problems with a root cause identified
% of response-time SLAs not met
% of service requests due to poor performance
% of service requests resolved within an agreed-upon period of time
% of services not covered in Continuity Plan
% of un-owned open service requests
% of unplanned outage/unavailability due to changes
% of workarounds to service requests applied
Accuracy of expenditure as defined in Capacity Plan
Accuracy of expenditure as defined in Continuity Plan
Availability
Availability (excluding planned downtime)
Average # of (re)-assignments of incidents
Average # of (re)-assignments of service requests
Average audit cycle of Configuration Items (CIs)
Average change closure duration
Average cycle time between urgent releases
Average incident closure duration
Average service request closure duration
Average time between CMDB reconciliations
Average time between updates of capacity plan
Average time between updates of continuity plan
Average time period between identifying and rectifying a discrepancy
Average time spent on continuity plans
Change closure duration rate
Change queue rate
Critical-time failures
Critical-time outage
Deviation of planned budget for SLA
Email backlog
First line service request closure rate
First-call resolution rate
Frequency of review of IT continuity plan
Incident backlog
Incident queue rate
IT service continuity plan testing failures
Mean time in postmortem
Mean time in queue
Mean Time to Action (MTTA)
Mean Time to Escalation (MTTE)
Mean time to repair
Mean Time to Ticket (MTTT)
Total changes after implementation
Total rework after implementation of changes
Total time in postmortem
Total time in queue
Total time spent on CMDB reconciliation
Total time to action (TTTA)
Total time to escalation (TTTE)
Total time to ticket (TTTT)
Quality control
# incident efficiency
# missing patches
# of backups and tests of computer systems
# of changes after the program is coded
# of changes to customer requirements
# of coding errors found during formal testing
# of cost estimates revised
# of defects found over period of time
# of documentation errors
# of error-free programs delivered to customer
# of errors found after formal test
# of keypunch errors per day
# of process step errors before a correct package is ready
# of reruns caused by operator error
# of revisions to checkpoint plan
# of revisions to plan
# of revisions to program objectives
# of test case errors
# of test case runs before success
# untested releases
% assignment content adherence
% availability errors
% change in customer satisfaction survey
% compliance issues caused by improper configuration of assets
% critical processes monitored
% critical time failures
% error in forecast
% error in lines of code required
% failed system transactions
% false detection rate
% fault slip through
% hours used for fixing bugs
% incidents after patching
% incidents backlog
% incidents queue rate
% of changes caused by a workaround
% of changes classified as miscellaneous
% of changes incorrectly classified
% of changes initiated by customers
% of changes insufficiently resourced
% of changes internal versus external
% of changes matched to scheduled changes
% of changes recorded ‘after the fact’
% of changes rejected for reassignment
% of changes scheduled outside maintenance window
% of changes subject to schedule adjustment
% of changes that cause incidents
% of changes that were insufficiently documented
% of changes with associated proposal statement
% of customer problems not corrected per schedule
% of defect-free artwork
% of input correction on data entry
% of problems uncovered before design release
% of programs not flow-diagrammed
% of reported bugs that have been fixed when going live
% of reports delivered on schedule
% of time required to debug programs
% of unit tests covering software code
Errors per thousand lines of code
Mean time between system IPLs (Initial Program Loads)
Mean time between system repairs
QA personnel as a % of # of application developers
Time taken for completing a test of a software application
Total rework costs resulting from computer program errors
Security
# detected network attacks
# exceeding alerts capacity threshold
# of detected network attacks
# of occurrences of loss of strategic data
# of outgoing viruses/spyware caught
# password policy violations
# security control
# time to detect incident
# unauthorized changes
# viruses detected in user files
% compliance to password policy
% computer diffusion rate
% downtime due to security incidents
% e-mail spam messages stopped
% employees with own ID and password for internal systems
% host scan frequency
% intrusion success
% IT security policy compliance
% IT security staff
% IT systems monitored by anti-virus software
% licenses purchased and not accounted for in repository
% modules that contain vulnerabilities
% of downtime due to security incidents
% of email spam messages stopped/detected
% of email spam messages unstopped/undetected
% of incidents classified as security related
% of patches applied outside of maintenance window
% of spam false positives
% of systems covered by antivirus/antispyware software
% of systems not to policy patch level
% of systems with latest antivirus/antispyware signatures
% of virus incidents requiring manual cleanup
% of viruses & spyware detected in email
% overdue incidents
% repeated IT incidents
% security awareness
% security incidents
% security intrusions detection rate
% servers located in data centers
% spam not detected
% trouble report closure rate
% virus driven e-mail incidents
% viruses detected in e-mail messages
Distribution cycle of patches
Latency of unapplied patches
Spam detection failure %
Time lag between detection, reporting and acting upon security incidents
Weighted security vulnerability density per unit of code
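
Patch-hygiene metrics like "% hosts missing high priority patches" and "latency of unapplied patches" fall out of a simple inventory join. A sketch over a hypothetical host inventory (host names and dates are made up):

```python
from datetime import date

# Hypothetical inventory: per host, the release dates of patches still missing.
hosts = {
    "web-01": [],                                  # fully patched
    "db-01": [date(2023, 3, 1)],
    "app-02": [date(2023, 2, 1), date(2023, 3, 15)],
}
today = date(2023, 4, 1)

# % of fully patched hosts
pct_fully_patched = 100 * sum(not missing for missing in hosts.values()) / len(hosts)

# Latency of unapplied patches: age (in days) of every patch still missing
latencies = [(today - d).days for missing in hosts.values() for d in missing]
avg_latency = sum(latencies) / len(latencies) if latencies else 0

print(f"fully patched: {pct_fully_patched:.0f}%, "
      f"avg unapplied-patch latency: {avg_latency:.0f} days")
```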
Software development
General
# of bugs per release
# of critical bugs compared to # of bugs
# of defects detected in the software divided by # of function points (FP)
# of defects per function point
# of defects per line of code
# of defects per use case point
# of escaped defects
# of realized features compared to # of planned features
# of software defects in production
# of successful prototypes
# software defects in production
# unapplied patch latency
% critical patch coverage
% defects reopened
% of application development work outsourced
% of bugs found in-house
% of hours used for fixing bugs
% of overdue software requirements
% of software build failures
% of software code check-ins without comment
% of software code merge conflicts
% of time lost re-developing applications as a result of source code loss
% of user requested features
% on time completion (software applications)
% overdue changes
% patch success rate
% routine changes
% schedule adherence in software development
% software build failures
% software code check-ins without comment
% software licenses in use
% software upgrades completed successfully
% unauthorized software licenses used
% unique requirements to be reworked
% user requested features
Average # defects created per man month
Average number of software versions released
Average progress rates (time versus results obtained)
Cyclomatic software code complexity
Halstead complexity
Lines of code per day
Rate of knowledge acquisition (progress within the research)
Rate of successful knowledge representation
System usability scale
Time ratio design to development
Time-to-market of changes to existing products/services
Time-to-market of new products/services
Work plan variance
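
Defect-density metrics such as "# of defects per line of code" and "# of defects per function point" are plain ratios; normalizing per 1000 lines (KLOC) is the common convention. For example:

```python
def defects_per_kloc(defects: int, lines_of_code: int) -> float:
    """Defect density: # of defects per 1000 lines of code."""
    return 1000 * defects / lines_of_code

def defects_per_fp(defects: int, function_points: int) -> float:
    """# of defects per function point."""
    return defects / function_points

print(defects_per_kloc(42, 56_000))   # 0.75 defects per KLOC
print(defects_per_fp(42, 350))        # 0.12 defects per function point
```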
Classes
# of logical code lines (one logical line may be split across several physical lines by a line continuation character)
# of all statements
# of ancestor classes
# of classes to which a class is coupled (coupling is defined as a method call or variable access)
# of comment lines
# of constructors defined by class
# of control statements
# of declarative statements (procedure headers, variable and constant declarations, all statements outside procedures)
# of events defined by class (This metric counts the event definitions)
# of executable statements
# of immediate sub-classes that inherit from a class
# of interfaces implemented by class
# of logical lines of whitespace
# of methods that can potentially be executed in response to a message received by a class, counting only the first level of the call tree
# of methods that can potentially be executed in response to a message received by a class, counting the full call tree
# of non-control statements, which are executable statements that are neither control nor declarative statements
# of non-private variables defined by class (excluding private variables)
# of physical source lines (Including code, comments, empty comments and empty lines)
# of procedure calls going outside of a class (each call is counted once, whether it’s early bound, late bound or polymorphic)
# of subs, functions and property procedures in class
# of variables defined and inherited by class
# of variables defined by class (does not include inherited variables)
Size of class (# of methods and variables)
Size of class interface (# of non-private methods and variables)
Procedures
# of distinct procedures in the call tree of a procedure
# of execution paths through a procedure (Cyclomatic complexity)
# of formal parameters defined in procedure header
# of global and module-level variables accessed by a procedure
# of input and output variables for a procedure (including parameters and function return value)
# of parameters used or returned by a procedure (output parameter)
# of procedure local variables and arrays (excluding parameters)
# of procedures that a procedure calls
# of procedures that call a procedure
% complexity inside procedures and between them
% external complexity of a procedure (# of other procedures called squared)
% internal complexity of a procedure (# of input/output variables)
% of Cyclomatic complexity without cases
Code lines count
Comment lines count
Fan-in multiplied by fan-out multiplied by procedure length (logical lines of code)
Length of procedure name in characters
Logical lines of code in call tree (# of lines that may potentially execute in a call to this procedure)
Logical lines of whitespace
Maximum # of nested conditional statements in a procedure
Maximum # of nested loop statements in a procedure
Maximum # of nested procedure calls from a procedure
Physical source lines (including code, comments, empty comments and empty lines)
Total amount of data read (procedures called + parameters read + global variables read)
Total amount of data written
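
The procedure metrics above come from a Visual Basic code analyzer, but the ideas port to other languages. As an illustration, here is a rough sketch of cyclomatic complexity (the "# of execution paths through a procedure") for Python source, using the standard ast module; the set of decision nodes counted below is a simplification of what real tools count:

```python
import ast

# Cyclomatic complexity ≈ 1 + the number of decision points.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISIONS) for node in ast.walk(tree))
    # each extra operand of `and`/`or` adds one more path
    decisions += sum(len(node.values) - 1
                     for node in ast.walk(tree) if isinstance(node, ast.BoolOp))
    return decisions + 1

src = """
def classify(x):
    if x < 0 and x != -1:
        return "negative"
    for _ in range(3):
        if x == 0:
            return "zero"
    return "positive"
"""
print(cyclomatic_complexity(src))  # 5: two ifs, one for, one `and`, plus 1
```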
Variables
# of data flows into and out of a variable
# of modules that use a variable
# of read instructions from a variable
# of reads and writes (a single instruction may count both as a read and as a write)
# of write instructions to variable
Length of variable name in characters
Project
# of abstract classes defined in project
# of actual couplings among classes in relation to the maximum # of possible couplings
# of class attributes (variables) hidden from other classes
# of class methods hidden from other classes
# of classes defined in project
# of concrete classes defined in project (a concrete class is one that is not abstract)
# of days passed between versions
# of enumeration constant names
# of enumeration names
# of files in project
# of global and module-level variables and arrays
# of interfaces defined in project
# of leaf classes defined in project (a leaf class has no descendants)
# of physical lines in dead procedures
# of procedure call statements (including calls to subs, functions and declares, accesses to properties and the raising of events)
# of read instructions from global and module-level variables
# of reads from and writes to global and module-level variables
# of real forms excluding any User Controls
# of root classes defined in project
# of standard modules: .bas files and Module blocks
# of unique names divided by # of names
# of unused constants
# of unused procedures
# of unused variables
# of user-defined types (or structure statements)
# of write instructions to global and module-level variables
% comment density (meaningful comments divided by # of logical lines of code)
% of actual polymorphic definitions of all possible polymorphic definitions
% of code lines counted from logical lines
% of enum constants among all constants
% of parameterized classes (generic classes)
% of reuse benefit (reuse of procedures)
Amount of data flow via global and module-level variables versus procedure parameters and function return values
Average # of calls on a code line (measures the modularity or structuredness)
Average # of constants in an Enum block
Average # of variable access instructions per logical line of code
Average file date
Average length of all constant names defined in VB files
Average length of names of variables (arrays and parameters defined in VB files, excluding parameters in event handlers and implementing procedures)
Average system complexity among procedures
Classes that do access attributes / Classes that can access attributes
Classes that do access operations / Classes that can access operations
Date of newest file in project
Deadness index
Density of decision statements in the code
Length of names
Length of procedure names
Maximum depth of call tree
Maximum depth of inheritance tree
Maximum size of call tree
Project size in kilobytes (includes all source files)
Reuse ratio for classes (a class is reused if it has descendants)
Specialization ratio for classes (a class is specialized if it inherits from a parent class)
Sum of SYSC over all procedures (measures the total complexity of a project)
Average # of times constants and enum constants are reused
The relative amount of internal inheritance (internal inheritance happens when a class inherits another class in the same system)
The sum of inherited methods divided by # of methods in a project
The sum of inherited variables divided by # of variables in a project
Project files
# of code lines
# of constants (excluding enum constants)
# of control statements divided by # of all executable statements
# of files that a file uses
# of files that use a file
# of logical source lines
# of procedures (including subs, functions, property blocks, API declarations and events)
# of variables, including arrays, parameters and local variables
% of full-line comment lines relative to logical lines
% of whitespace lines counted from logical lines
File size in kilobytes
Full-line and end-of-line comments that have meaningful content
Meaningful comments divided by # of logical lines of code
Web
Server
# buffer size of router
# host latency
# of domain names (and aliases)
# of files on server
# of geographical locations
# of internet nodes mapped to same domain name
# of sub-sites
# of Web pages on server
# refused sessions by server
# server connection time
# server response time
Files by traffic % (e.g., % of files account for % of traffic)
HTTP node classification (inaccessible, redirection, accessible; these classifications will be time-sensitive; see volatility metric below)
Internet node identification (IP address and port)
Pages by traffic % (e.g., % of pages account for % of traffic)
Ratio of explicit clicks to implicit clicks for server
Server-side filtering (robots.txt, firewalls)
Top-level domain (com, edu)
Volatility level (summarizing the accessibility of the server during a given time period)
User
# of files transferred per user
# of pages transferred per user
# of unique files transferred per user
# of unique pages transferred per user
# of unique Web sites visited per user
# of user access methods (ISP, dial-up modem, wireless network, etc)
# of Web sites visited per user
Data filtering imposed by user (which client filters have been activated by the user)
Inter-request time per user (request to request time)
Inter-session time per user (session to session time)
Intra-request time per user (request to render time)
Path length of sessions per user
Path length of visit per site per user
Ratio of embedded clicks to user-supplied clicks, per user per session
Ratio of explicit clicks to implicit clicks, per user per session
Reoccurrence rates for files, pages, and sites
Sessions per user per time period
Stack distance per user
Temporal length of sessions per user
Temporal length of visit per site per user
User classification (adult, child, professional user, casual user, etc)
User response rate and attrition rate
Site
# of bytes
# of cookies supplied
# of levels in site’s internal link structure (depth)
# of pages served per time period
# of search engines indexing the site
# of types of Web collections
# of unique Web sites (filter out Web sites located at multiple IP addresses)
# of user Web page requests per time period
# of Web collections
# of Web pages
# of Web servers
# of Web site publishers
# of Web sites
% breakdown of protocols across the periphery
% of site devoted to CGI/dynamic content
% of textual description of site’s content
Byte latency
Bytes transferred per time period
Network traffic (bytes transferred, Web pages accessed)
Ratio of size of core to size of periphery
Pages
# and type of embedded non-text objects (images, video, streaming data, applets)
# of content access schemes (free, pay-per-view, subscription)
# of types of collections (online journal, photo gallery)
# of Web pages in collection
% breakdown of MIME types in hyperlinks
% breakdown of protocols in hyperlinks
% of textual description of page’s content
Aggregate size of constituent Web resources (in bytes)
Average # of hyperlinks per page
Birth and modification history (major revisions of content — from HTTP header)
Ratio of internal to external links on page
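
Page metrics such as "average # of hyperlinks per page" and the "ratio of internal to external links" are straightforward to approximate with the standard library. A toy sketch (the host name is hypothetical):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a href=...> hyperlinks, splitting them into internal
    (relative or same-host) and external ones."""
    def __init__(self, own_host: str):
        super().__init__()
        self.own_host = own_host
        self.internal = 0
        self.external = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href") or ""
        if href.startswith(("http://", "https://")) and self.own_host not in href:
            self.external += 1
        else:
            self.internal += 1

page = '<a href="/about">About</a> <a href="https://example.org/x">ext</a>'
counter = LinkCounter("mysite.example")
counter.feed(page)
print(counter.internal, counter.external,
      counter.internal / max(1, counter.external))  # internal-to-external ratio
```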
Training
# of attendees at user training sessions
# of hours users have spent on training services
# of incidents caused by deficient user and operational documentation and training
# of incidents caused by deficient user training
# of users trained successfully
Hours of user training
IT investment in IT staff training
Satisfaction scores for training and documentation
Time lag between changes and updates of documentation and training material
Author: Majestic12