Numerous measurement methods (metrics) have been proposed to measure software complexity, but they have been criticized for lacking a theoretical framework to guide measurement. To fill this gap, we propose Wood's task complexity model as a theoretical model that makes it possible to both capture and quantify software functional complexity. Wood's model analyzes task complexity along three dimensions: component complexity, coordination complexity and dynamic complexity. We use the first two dimensions of the model to analyze software complexity at an early phase of the software lifecycle (the analysis phase). The third dimension is difficult to capture at this stage, and so has been left out of our work. The empirical study in this paper aims to validate this conceptual model of software functional complexity. It also reports an effort estimation model built on 15 software maintenance projects of a telecom company. The results show that this estimation model, which takes into account both component complexity and coordination complexity, outperforms an effort prediction model built on a linear regression between maintenance effort and COSMIC functional size for the same projects.
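The abstract does not specify the functional form of either estimation model. Purely as an illustrative sketch, one might compare a simple linear regression on COSMIC size against a multiple regression on component and coordination complexity, scoring both with MMRE (mean magnitude of relative error), a common criterion in effort-estimation studies. All data and variable names below are hypothetical:

import numpy as np

# Illustrative data only: 15 hypothetical maintenance projects.
# cosmic_size  = COSMIC functional size (CFP)
# comp, coord  = component and coordination complexity scores
# effort       = actual maintenance effort (person-hours)
rng = np.random.default_rng(0)
cosmic_size = rng.uniform(20, 200, 15)
comp = rng.uniform(1, 10, 15)
coord = rng.uniform(1, 10, 15)
effort = 1.5 * cosmic_size + 5 * comp + 8 * coord + rng.normal(0, 10, 15) + 50

def fit_and_mmre(X, y):
    """Ordinary least squares fit; returns the mean magnitude of
    relative error (MMRE) of the fitted model's predictions."""
    X1 = np.column_stack([np.ones(len(y)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares coefficients
    pred = X1 @ beta
    return np.mean(np.abs(y - pred) / y)

# Baseline: effort ~ COSMIC functional size alone.
mmre_size = fit_and_mmre(cosmic_size.reshape(-1, 1), effort)

# Proposed: effort ~ component + coordination complexity.
mmre_cplx = fit_and_mmre(np.column_stack([comp, coord]), effort)

print(f"MMRE (COSMIC size only):         {mmre_size:.2f}")
print(f"MMRE (component + coordination): {mmre_cplx:.2f}")

A lower MMRE means predictions deviate less, in relative terms, from actual effort; this is one plausible way the paper's comparison could be operationalized, not a reconstruction of its actual method.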
C.-D. Tran and A. Abran, "Measuring Software Functional Size: Towards an Effective Measure of Complexity," in Proc. IEEE International Conference on Software Maintenance (ICSM 2002), Montréal, Canada, 2002.
Data manipulation, or algorithmic complexity, is not adequately taken into account in any of the most popular functional size measurement methods. In this paper, we recall some well-known methods for measuring problem complexity in data manipulation and highlight the interest of arriving at a new definition of complexity. Up to now, the concept of complexity has always been associated with effort. The new definition has the advantage of dissociating effort from complexity, referring instead to the characteristics and intrinsic properties of the software itself. Our objective is to propose some simple and practical approaches to measuring some of these characteristics which we consider particularly relevant, and to incorporate them into a functional size measurement method.

1. Introduction

Software size in function points is a measure of the size of the product and can be used to evaluate and predict some aspects of the production process, such as the effort, the cost and the productivity of software development. There are two main approaches to measuring software size: the a posteriori approach, such as LOC, and the a priori approach, such as the methods based on software functionality [1, 2, 3, 4, 18, 22, 24, 25]. LOC is the simplest and earliest method for measuring software size. While it is very useful, it has been heavily criticized [4, 11] for the way in which it defines a line of code and for how it deals with different types of programming language. The a priori methods are gaining more and more attention in the software measurement community because they are independent of programming languages and they allow early estimation of the size of the end product. When calibrated to the software environment, such an estimate provides a significant index for evaluating the development effort and assessing the cost of a software product. In fact, Albrecht's Function Point Analysis (FPA) is widely used to measure the software size of management information systems (MIS). However, FPA is criticized for not taking complexity into account in an objective way, and is therefore not likely to be applicable to all types of software. Some attempts have been made to adapt FPA to software types which are complex in terms of data manipulation, such as real-time software, in order to estimate software complexity objectively [1, 2, 18, 24, 25, 27]. The Mark-II Function Point Method is one approach for doing this; however, it requires historical data, which may make it difficult to apply to an application without such data. There are other FPA extensions which deal with the special characteristics of software, expressing the algorithmic difficulties or the complexity of the process of transforming (or manipulating) data from inputs to produce the expected outputs.
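Since the introduction leans on how function point methods quantify size, a minimal sketch of Albrecht's unadjusted function point (UFP) count may help. The five component types and their simple/average/complex weights are the standard FPA values; the example counts are invented for illustration:

# Albrecht FPA: unadjusted function points (UFP) are a weighted sum of
# five component types. The weights below are the standard FPA values
# for simple / average / complex instances of each type.
WEIGHTS = {
    "external_input":     (3, 4, 6),
    "external_output":    (4, 5, 7),
    "external_inquiry":   (3, 4, 6),
    "internal_file":      (7, 10, 15),
    "external_interface": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps component type -> (n_simple, n_average, n_complex)."""
    return sum(
        n * w
        for ctype, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[ctype])
    )

# Hypothetical counts for a small application.
example = {
    "external_input":     (4, 2, 1),   # 4 simple, 2 average, 1 complex
    "external_output":    (3, 1, 0),
    "external_inquiry":   (2, 0, 0),
    "internal_file":      (1, 1, 0),
    "external_interface": (0, 1, 0),
}
print(unadjusted_fp(example))  # e.g. inputs contribute 4*3 + 2*4 + 1*6 = 26

Note that the fixed three-level weighting is exactly where the criticism quoted above applies: deciding whether a component is simple, average or complex rests on the rater's judgment rather than on an objective measure of data-manipulation complexity.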