In this topic we see how session memory is affected by the number of rows selected and updated.

#### Rows Selected

In these tests one mutable measure is defined and no rows are updated. The number of rows selected varies from one query to the next.

##### Tests

Testing 0 Rows

```
select 1 from t_100k where key = 0
model
  dimension by( key_1 )
  measures( m_1 )
  ()
;
```

Testing 50K Rows

```
select 1 from t_100k where key between 1 and 50000
model
  dimension by( key_1 )
  measures( m_1 )
  ()
;
```

Testing 100K Rows

```
select 1 from t_100k
model
  dimension by( key_1 )
  measures( m_1 )
  ()
;
```

Testing Non-Model Query 100K Rows

```
select 1 from t_100k ;
```

##### Results

```
                                                     Non-Model Query
Metric                     0 Rows 50K Rows 100K Rows       100K Rows
-------------------------- ------ -------- --------- ---------------
session pga memory max (K)    256    5,760    12,480             256
session uga memory max (K)    256    5,577    12,125             256
```

See Statistics Descriptions and Latches and Internal Locks for metric descriptions.

##### Observations

The number of rows selected affects the amount of session memory used by MODEL queries.
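Figures like those above can be gathered from the current session's own statistics. The source does not show its measurement query, but a sketch along these lines, assuming SELECT access to the V$MYSTAT and V$STATNAME views, reports the two metrics used in these tests:

```
select n.name, round( s.value / 1024 ) as kb
from   v$statname n
join   v$mystat   s
on     s.statistic# = n.statistic#
where  n.name in ( 'session pga memory max', 'session uga memory max' )
;
```

Running a query of this form before and after each test query, and differencing the results, yields per-test memory figures.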

#### Rows Updated

In these tests one mutable measure is defined and the number of rows selected is constant. The number of rows updated varies from one query to the next.

##### Tests

Testing 0 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[0] = 'x' )
;
```

Testing 189 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 189] = 'x' )
;
```

Testing 190 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 190] = 'x' )
;
```

Testing 252 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 252] = 'x' )
;
```

Testing 253 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 253] = 'x' )
;
```

Testing 315 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 315] = 'x' )
;
```

Testing 316 Rows

```
select key, m_1 from t_1k
model
  dimension by( key )
  measures( m_1 )
  rules update
  ( m_1[key between 1 and 316] = 'x' )
;
```

##### Results

```
                                    189  190  252  253  315   316
Metric                     0 Rows  Rows Rows Rows Rows Rows  Rows
-------------------------- ------ ----- ---- ---- ---- ---- -----
session pga memory max (K)    256   256  512  512  768  768 1,024
session uga memory max (K)    256   256  399  399  647  647   896
```

See Statistics Descriptions and Latches and Internal Locks for metric descriptions.

##### Observations

There are two things to note here. First, session memory increased as the number of rows updated increased. Second, session memory increased in discrete chunks rather than smoothly. Two similar queries that update different numbers of rows may therefore consume the same amount of session memory, or they may not, depending on whether a chunk boundary falls between them.
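The chunked growth can be verified with simple arithmetic over the peak PGA figures reported above; each jump between plateaus is one 256 KB step (a throwaway query against Oracle's DUAL table, with values copied from the results table):

```
select 512  - 256 as jump_at_190_rows,
       768  - 512 as jump_at_253_rows,
       1024 - 768 as jump_at_316_rows
from   dual
;
```

Each column returns 256, i.e. the same allocation increment at every plateau boundary.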

#### Gotchas

Although session memory increases with the number of rows updated, we should not assume that updating fewer rows always reduces session memory usage. For example, as the results in the Dimension References tutorial suggest, using LIKE in dimension references causes MODEL queries to use the same amount of session memory regardless of how many rows are actually updated.

Testing 1000 Rows Updated

```
select key2, m_1 from t_1k
model
  dimension by( key2 )
  measures( m_1 )
  rules update
  ( m_1[key2 like '%'] = 'x' )
;
```

Testing 100 Rows Updated

```
select key2, m_1 from t_1k
model
  dimension by( key2 )
  measures( m_1 )
  rules update
  ( m_1[key2 like '1__'] = 'x' )
;
```

Testing 10 Rows Updated

```
select key2, m_1 from t_1k
model
  dimension by( key2 )
  measures( m_1 )
  rules update
  ( m_1[key2 like '1_'] = 'x' )
;
```

```
                           1000 Rows 100 Rows 10 Rows
Metric                       Updated  Updated Updated
-------------------------- --------- -------- -------
session pga memory max (K)     3,584    3,584   3,584
session uga memory max (K)     3,382    3,382   3,382
```

See Statistics Descriptions and Latches and Internal Locks for metric descriptions.

Warning: Results on your own systems with your own data will differ from these results. Results will even differ from one set of test runs to the next on the same machine. Run your own tests and average the results from multiple runs before making performance decisions.