We have created an activity in which students can visualize a theoretical neural network whose states evolve according to a well-known simple law. This activity provided an uncomplicated approach to a paradigm commonly represented through complex mathematical formulation. From their observations, students learned many basic principles and concepts of dynamic systems, all necessary to understand the structure-dynamics relationship. This relationship is one of the main paradigms in the integrated curriculum of many health sciences professionals.

The theory of dynamic systems can improve the understanding of the structure-dynamics relationship in normal and pathological biological systems. Here, by calculating the dynamics, we propose a neural network representation as a teaching tool for learning the structure-function relationship. Biological systems are usually very complex, and traditional statistical techniques have shown limited success in modeling them because of their basic assumption of linear combinations (8, 11). Artificial neural networks incorporate high-order interactions between predictive variables and have proven superior to linear data-mining methods in a number of areas of medical research (2, 6, 7), e.g., predicting patient survival (1, 5) and drug response (13), and modeling parts of the human body and recognizing diseases from various scans (e.g., cardiograms, computed axial tomography scans, ultrasonic scans, etc.) (12). In the activity, we proposed to students a theoretical "human" neural network, with each student representing a neuron. In a neural network, the structure is determined by all the neurons and their interconnections, together with a law for updating the neurons' states (3, 4). Through the study of the dynamics of the network, students learned that an important fraction of the structure of a complex system is given by the "interaction between its units" and not only by "the unit." This is important because the interaction is not always apparent, and often it is not well known how it determines the dynamics. In short, the interactions belong to the structure and drive the dynamics.

### Modeling a Neural Network

To let students work out the dynamics of the neural network, each student represented a neuron (*i*; where *i* = 1, 2, … , 5) and wrote down his/her own neuronal threshold on a card, which he/she held in such a way that it could be easily seen. In the activity, a variable threshold was used. This gave students a better understanding of how a variable threshold affects the global dynamics of a neural network. In addition, it allowed us to demonstrate to our students that a threshold can be a component of neural plasticity (as usually found in biological neural networks). The connections between two neurons were simulated with cords. Connections went from a presynaptic neuron to a postsynaptic neuron. The synaptic strength was indicated by a scalar variable (input weight) written on a target displayed on the postsynaptic terminal of the connection. If there was no connection, the input weight was set to zero. If the connection was excitatory or inhibitory, the input weight was positive or negative, respectively. Every neuron in the network was connected to every other neuron but not to itself. Since the calculation of the total summed input only requires knowing the value of each input weight (four per neuron) and the corresponding neuronal states, modeling of the network can easily be done without connecting the students with cords. However, we recommend following the initial suggestion because it provides the most explicit way to represent the network. We recognize that it is possible to add more visual elements to highlight specific components of the network or the interconnections between neurons. We would rather leave this possibility open to the users of our teaching tool.

In the following sections, we describe some essential aspects used to obtain the dynamics of the neural network. At the beginning, a random selection of the initial state of the neural network was made. Depending on a coin toss, each neuron's state was set to "active" (1) or "inactive" (0). So, the state of a five-neuron network is represented by a five-component binary vector, in which the *i*th component corresponds to the *i*th neuron's state.
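The coin-toss initialization can be sketched in a few lines of Python (the function name and the default network size are illustrative, not part of the original activity):

```python
import random

def random_initial_state(n=5):
    """Coin toss per neuron: each state is 'active' (1) or 'inactive' (0)."""
    return [random.choice([0, 1]) for _ in range(n)]

initial_state = random_initial_state()  # a five-component binary vector
```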

#### Calculation of the total summed input.

The influence on any neuron (postsynaptic neuron) induced by the other neurons (presynaptic neurons) is determined by the total summed input. The term "total summed input" is commonly used by practitioners in the field of computer science when describing artificial neural networks and replaces the term "postsynaptic potential," which is used to describe biological neural networks. Throughout the article, we have replaced other terms used for biological neurons (i.e., synapse and synaptic strength) with those used when referring to artificial neurons (i.e., connection and input weight). The total summed input of the *i*th neuron is calculated as the sum of the partial inputs originating from the other neurons. Each partial input, generated on a postsynaptic neuron, is calculated by multiplying each presynaptic neuron's state (1 or 0) by the corresponding input weight "toward" the postsynaptic neuron.
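This calculation can be sketched in Python; the weights below are hypothetical example values, with `w[i][j]` read as the input weight from presynaptic neuron *j* onto postsynaptic neuron *i* (and `w[i][i] = 0`, since a neuron is not connected to itself):

```python
def total_summed_input(i, states, w):
    """Sum of partial inputs on neuron i: presynaptic state times input weight."""
    return sum(states[j] * w[i][j] for j in range(len(states)) if j != i)

# Hypothetical five-neuron example: a network state and integer input weights.
states = [1, 0, 1, 1, 0]
w = [[ 0,  2, -3,  1,  4],
     [ 5,  0, -1,  2, -2],
     [-4,  3,  0, -1,  2],
     [ 1, -2,  3,  0, -5],
     [ 2,  1, -1,  4,  0]]

h0 = total_summed_input(0, states, w)  # (1)(-3) + (1)(1) = -2
```

Only active presynaptic neurons (state 1) contribute; inactive ones (state 0) drop out of the sum.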

#### Update rule.

The calculation of a new neuronal state is accomplished by means of the following simple algorithm: “If the total summed input at *time t* is equal to or greater than the threshold, then, in *t* + 1, the neuron state will be 1; otherwise, it will be 0.”
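The rule above can be written directly as a minimal Python sketch (function name illustrative):

```python
def next_state(total_input, threshold):
    """Update rule: active (1) if the total summed input at time t is greater
    than or equal to the threshold; otherwise inactive (0) at time t + 1."""
    return 1 if total_input >= threshold else 0

next_state(5, 3)   # 1: input reaches the threshold
next_state(3, 3)   # 1: "equal to" also activates
next_state(-2, 0)  # 0: input falls below the threshold
```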

#### Registry of data.

As an example of the above, Fig. 1 shows a five-neuron network where the network state, input weights, and thresholds are observed. To register the network dynamics, a table (such as that shown in Table 1) was used. A representation of the neural network by the students is shown in Fig. 2. The network represented is similar to the one shown in Fig. 1.

Mathematical formulation of the model used in the activity is included in appendix 1. This formulation was not necessary for the activity; however, we recommend using it to practice the mathematical modeling.

### Description of the Students' Work

The activity was incorporated into the Workshop for the Integration of Basic Sciences course for first-year medical students, who performed the following activities.

#### Work in small groups.

Students worked in small groups for 90 min; there was one instructor and seven students in each group, with five of the students representing the neural network. In *step 1*, a multiply connected "human" neural network was constructed, with each student representing a neuron, using random integers (from −10 to 10) for both neuronal couplings and thresholds. In *step 2*, the initial network state (initial configuration) was determined by choosing random values (1 or 0) for each neuron in the network. In *step 3*, the network dynamics (synchronous mode) were calculated; once an invariable final network behavior was observed, the final network state(s) could be described. Typically, this required between 5 and 15 updates of the network. In *step 4*, to study the dependence of the dynamics on the initial network state, *step 3* was repeated after the initial state of some neurons of the network was changed. In *step 5*, to study the dependence of the dynamics on the whole set of structural parameters, *step 1* was repeated after new thresholds and couplings were selected; however, the same initial network configuration (obtained in *step 2*) was conserved. Finally, *step 3* was repeated to describe the final state of the network.
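*Steps 1-3* can be sketched as a short Python simulation. Because a finite binary network updated synchronously must eventually revisit a state, the loop stops at the first repeat and splits the trajectory into its transient and its attractor (function names and the two-neuron example parameters are illustrative, not the networks used in class):

```python
def simulate(w, thresholds, state, max_steps=100):
    """Synchronous dynamics: update all neurons at once until a state repeats,
    then return (transient states, attractor states)."""
    seen = {tuple(state): 0}
    trajectory = [list(state)]
    n = len(state)
    for t in range(1, max_steps + 1):
        h = [sum(state[j] * w[i][j] for j in range(n) if j != i) for i in range(n)]
        state = [1 if h[i] >= thresholds[i] else 0 for i in range(n)]
        key = tuple(state)
        if key in seen:  # first revisited state closes the attractor
            return trajectory[:seen[key]], trajectory[seen[key]:]
        seen[key] = t
        trajectory.append(list(state))
    return trajectory, []  # max_steps reached without a repeat

# Two mutually inhibitory neurons (weights -1) with zero thresholds: starting
# from [1, 1], the network alternates between [1, 1] and [0, 0].
transient, attractor = simulate([[0, -1], [-1, 0]], [0, 0], [1, 1])
```

Here `transient` is empty and `attractor` is the two-state cycle `[[1, 1], [0, 0]]`, a small instance of the "invariable final network behavior" the students looked for by hand.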

#### Work in plenary assembly.

Students worked in plenary assembly for 90 min. All the groups met with their instructors. In plenary assembly, students discussed their results with their instructors and with the other groups (*step 6*).

### Discussion

This work allowed the students to discuss the dependence of system dynamics on both the initial state and the structural parameters. Students were able to perceive how the system's interaction laws act as part of its structure and are one cause of its dynamics. Finally, the instructors could introduce the concepts of dynamic systems appearing in appendix 2 and provide a framework to discuss some cognitive models, such as associative memory, learning, and perception processes, that were suggested by the students' conclusions about the structure-dynamics relationship.

Some neural network models include only symmetric connections. For example, Hopfield's network defines symmetric connections, with all network states going to equilibrium whenever asynchronous dynamics are used (9). In our work, we did not use symmetric connections, because they are not representative of couplings between biological neurons. Nonsymmetric connections allow many different dynamic behaviors, not just fixed points. However, we invited some student working groups to analyze symmetric connections so that the class could discuss the consequences of both structures.

Our activity provides opportunities for understanding several concepts associated with biological neural networks. Our theoretical model demonstrates that the choice of the structural parameters constituting a complex system defines the resultant dynamics. Often, much like in biological neural networks, the resultant dynamics of the theoretical network are robust to small changes, such as changes in some initial states of the network, interconnections, or thresholds. In addition, our system allows for the visualization of the interdependence between the individual state of a neuron and the state of each of the other neurons in the network. This concept has special relevance for the future professional life of our students [i.e., they will have to consider that a psychotropic drug may influence a broader field (neural network) than the one that it directly targets]. In addition, the global dynamics of the artificial network may represent the capacity of a biological network for distributed storage, which has been proposed for some models of associative memory. Finally, theoretical neural networks exhibit a capability for parallel processing, like that of many biological neural networks (9, 10).

Generally, the study of dynamic systems, and especially of neural networks, creates mathematical difficulties for most students in the first year of a biomedical career. These difficulties usually extend to the availability of software to carry out the dynamic simulations (2). Thus, it is necessary to develop activities like the one we propose, in which students can actively participate without a computer or extensive mathematical knowledge.

As suggested here, first-year medical students can develop a more dynamic and integrated vision of biological systems and can gain greater insight into the use of theoretical models through a very simple interactive activity. Taking into consideration the constraints associated with applying a theoretical model, like the one proposed here, to biological systems will help instructors who consider adopting the proposed model in physiology education. Some recommendations for discussing the results are included in appendix 2.

## DISCLOSURES

No conflicts of interest, financial or otherwise, are declared by the author(s).

## ACKNOWLEDGMENTS

The authors thank Juan Montiel for contributions to the preparation of this manuscript. Elizabeth J. Kovacs and Gaylord J. Knutson are acknowledged for critical reading of the manuscript.

- Copyright © 2010 the American Physiological Society

## APPENDIX 1: MATHEMATICAL FORMULATION OF A THEORETICAL NEURONAL NETWORK

The following is a mathematical formulation of the model used in the activity. This formulation was not necessary for the activity; however, we recommend using it to practice the mathematical modeling.

#### Structural Parameters of the Neural Network

Let us consider a neural network with *N* neurons, each one denoted the *i*th neuron (where *i* = 1 to *N*). *w*_{ij} is a parameter defined as the *j*th neuron's input weight on the *i*th neuron, and *v*_{i} is the *i*th neuron's threshold. For simplicity, in the work with our students, we used only integers between −10 and 10 (including 0).

#### Neural Network State

The neural network state (configuration) at *time t* is a vector with *N* binary components [*S*_{1}(*t*), *S*_{2}(*t*), … , *S*_{N}(*t*)], so that the *i*th component [*S*_{i}(*t*)] is the *i*th neuron's state at *time t*.

#### Total Summed Input

The total summed input [*h*_{i}(*t*)] at *time t* on the *i*th neuron is defined as the sum, over all neurons *j* other than *i*, of the *j*th neuron's state multiplied by its input weight on the *i*th neuron: *h*_{i}(*t*) = Σ_{j≠i} *w*_{ij}*S*_{j}(*t*).

#### Calculation of Neural Network Dynamics

The *i*th neuron's state at *time t* + 1 [*S*_{i}(*t* + 1)] is defined according to the following function: *S*_{i}(*t* + 1) = *f*[*h*_{i}(*t*) − *v*_{i}], where the transfer function (*f*) is defined so that its value equals 1 [if *h*_{i}(*t*) − *v*_{i} ≥ 0] or 0 [if *h*_{i}(*t*) − *v*_{i} < 0].

The updating of the states is synchronous: all the network's neurons update their states simultaneously. *Time t* increases by one unit when all the network's neurons have been updated.

While it is not strictly necessary, a spreadsheet program can be used [the easiest option is Excel (Microsoft Office)]. This software can help students analyze the dynamics of a greater number of neurons over any time interval or propose diverse structural parameters (thresholds and input weights) and different initial states of the network. In addition, this tool allows the instructor to verify the calculations.

## APPENDIX 2: SOME RECOMMENDATIONS TO DISCUSS THE RESULTS

#### Final Dynamics

There are stable states of equilibrium (neural network configurations of equilibrium) that are capable of attracting neighboring states, such that the latter evolve toward the former. These stable states of equilibrium are also termed "attractor fixed points." Alternatively, the state may not evolve toward a state of equilibrium but rather toward a finite succession of states. These successions of states are termed "limit cycles," "attractor cycles," or "stable cycles." In addition, in an infinite network, the dynamics may be more diverse and complex, yielding chaotic dynamics. Fortunately, many of the dynamic patterns are shared by multiple different systems, so dynamic systems theory is a powerful integrative tool.
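Once the repeating segment of a trajectory has been found (as students did by hand), classifying the final dynamics is simple: a one-state attractor is a stable state of equilibrium (attractor fixed point), and a longer succession is a limit cycle. A minimal sketch (the states shown are hypothetical examples):

```python
def classify_attractor(attractor):
    """One repeating state = attractor fixed point; several = limit cycle."""
    return "attractor fixed point" if len(attractor) == 1 else "limit cycle"

classify_attractor([[0, 0, 0, 0, 0]])  # "attractor fixed point"
classify_attractor([[1, 1], [0, 0]])   # "limit cycle"
```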

The student should understand that the entire dynamic is characterized by an initial state, a transient phase, and final dynamics. Final dynamics can be classified qualitatively according to a finite number of cases. In our model, we found two cases: stable states of equilibrium and attractor cycles.

#### Sensitivity to Initial Conditions

Given a network structure (constant input weights and thresholds), different final dynamics can be obtained depending on the initial conditions (dependence on, or sensitivity to, initial conditions). In the model studied here, the larger the network and the nearer two states are, the more probable it is that they belong to the same attraction basin of a state of equilibrium or a limit cycle (independence from the initial conditions).
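A tiny illustration in Python, with hypothetical parameters: in a two-neuron network with mutual inhibition (input weights −1, thresholds 0), the initial state [1, 0] is already a state of equilibrium, while the nearby state [1, 1], differing in only one neuron, falls into a two-state limit cycle — the same structure, two different final dynamics:

```python
def step(state, w, thresholds):
    """One synchronous update of every neuron in the network."""
    n = len(state)
    h = [sum(state[j] * w[i][j] for j in range(n) if j != i) for i in range(n)]
    return [1 if h[i] >= thresholds[i] else 0 for i in range(n)]

w, v = [[0, -1], [-1, 0]], [0, 0]
step([1, 0], w, v)              # [1, 0]: an attractor fixed point
step([1, 1], w, v)              # [0, 0] ...
step(step([1, 1], w, v), w, v)  # ... and back to [1, 1]: a limit cycle
```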

#### Structural Stability

In the model shown here, given the same initial neural network state, sufficiently small changes in a few structural parameters failed to produce a change in the final dynamic patterns. This is known as structural stability and is a characteristic of most biological systems.

#### Information Distribution and Resolution of Problems

The codification of information in a neural network can be understood as a distributed phenomenon. A question can be codified as an initial network state, and the attractor state can codify the answer. The problem of learning would then consist of finding the best structure (input weights and thresholds) to obtain the right answer (final network state) through a dynamic evolution starting from a given question (initial network state). That is, learning is associated with neural plasticity.

#### Robustness of the Nervous System

Despite some structural damage and loss of information, the nervous system can generally maintain normal cognitive functions. The corresponding properties of structural stability and distributed storage have been shown in our theoretical neural network.