Meridional Overturning - Depth Space
Description
This recipe shows how to calculate the Atlantic Meridional Overturning Circulation (AMOC) stream function in depth coordinates using annual-mean outputs from the National Oceanography Centre Near-Present-Day global eORCA1 configuration of NEMO, forced with JRA55-do from 1976-2024.
For more details on this model configuration and the available outputs, users can explore the Near-Present-Day documentation here.
Background
The vertical overturning stream function is routinely used to characterise the strength and structure of the AMOC in depth-space as a function of latitude $\phi$ and can be defined at time $t$ as follows:
$$\Psi_{z}(\phi, z, t) = \int_{z}^{\eta} \int_{x_w}^{x_e} v(\lambda, \phi, z, t) \ dx \ dz$$
where the meridional velocity $v(\lambda, \phi, z, t)$ is first integrated zonally between the western $x_w$ and eastern $x_e$ boundaries of the basin before being accumulated vertically from the sea surface $\eta$ to a specified depth $z(\lambda, \phi)$ (decreasing downward).
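Discretely, this is a zonal sum of the meridional transport through each grid cell face, followed by a cumulative sum in depth. A minimal NumPy sketch of this two-step reduction, using a made-up velocity field and uniform cell sizes rather than the eORCA1 grid:

```python
import numpy as np

# Toy meridional velocity v(k, j, i) on a 3-level, 2-row, 4-column grid (m/s):
v = np.ones((3, 2, 4))
dx = 2.0 * np.ones((2, 4))  # zonal cell widths, analogous to e1v (m)
dz = np.ones((3, 2, 4))     # vertical cell thicknesses, analogous to e3v (m)

# Step 1: integrate the meridional transport zonally across the basin:
transport = (v * dx * dz).sum(axis=-1)  # shape (k, j)

# Step 2: accumulate vertically from the surface downwards:
psi = np.cumsum(transport, axis=0)      # stream function (m^3/s)
```

Each level contributes 8 m³/s per latitude row here, so the stream function grows linearly with depth; the recipe below performs the same reduction on the model grid via `integral()`.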
# -- Import required packages -- #
import matplotlib.pyplot as plt
import xarray as xr
from nemo_cookbook import NEMODataTree
xr.set_options(display_style="text")
<xarray.core.options.set_options at 0x30753fa10>
Using Dask
Optional: connect a Client to a Dask LocalCluster to run the analysis in parallel.
Note that, although Dask is not strictly necessary for this simple eORCA1 example, it would be essential when generalising this recipe to eORCA025 or eORCA12 outputs, where running everything in a single process would make the calculations unnecessarily slow.
# -- Initialise Dask Local Cluster -- #
import os
import dask
from dask.distributed import Client, LocalCluster
# Update temporary directory for Dask workers:
dask.config.set({'temporary_directory': f"{os.getcwd()}/dask_tmp",
'local_directory': f"{os.getcwd()}/dask_tmp"
})
# Create Local Cluster:
cluster = LocalCluster(n_workers=4, threads_per_worker=3, memory_limit='5GB')
client = Client(cluster)
client
Accessing NEMO Model Data
Let's begin by loading the grid variables for our eORCA1 JRA-55 model from the JASMIN Object Store.
Alternatively, you can replace the domain_url below with a file path to your domain_cfg.nc file and read this with xarray's open_dataset() function.
# Define directory path to ancillary files:
domain_url = "https://noc-msm-o.s3-ext.jc.rl.ac.uk/npd-eorca1-jra55v1/domain_cfg"
# Open eORCA1 NEMO model domain_cfg:
ds_domain = xr.open_zarr(domain_url, consolidated=True, chunks={})
ds_domain
<xarray.Dataset> Size: 709MB
Dimensions: (y: 331, x: 360, nav_lev: 75)
Coordinates:
* nav_lev (nav_lev) int64 600B 0 1 2 3 4 5 6 7 ... 68 69 70 71 72 73 74
* x (x) int64 3kB 0 1 2 3 4 5 6 7 ... 353 354 355 356 357 358 359
* y (y) int64 3kB 0 1 2 3 4 5 6 7 ... 324 325 326 327 328 329 330
Data variables: (12/49)
atlmsk (y, x) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
bathy_metry (y, x) float32 477kB dask.array<chunksize=(331, 360), meta=np.ndarray>
bottom_level (y, x) int32 477kB dask.array<chunksize=(331, 360), meta=np.ndarray>
e1f (y, x) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
e1t (y, x) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
e1u (y, x) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
... ...
top_level (y, x) int32 477kB dask.array<chunksize=(331, 360), meta=np.ndarray>
umask (nav_lev, y, x) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
umaskutil (y, x) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
vmask (nav_lev, y, x) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
vmaskutil (y, x) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
wmask (nav_lev, y, x) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
Attributes:
CfgName: UNKNOWN
CfgIndex: -999
Iperio: 1
Jperio: 0
NFold: 1
NFtype: F
VertCoord: zps
IsfCav: 0
file_name: mesh_mask.nc
TimeStamp: 01/03/2025 22:19:49 -0000
Next, we need to import the meridional velocity and vertical grid cell thicknesses stored at V-points in a single dataset.
# Define directory path to model output files:
gridV_url = "https://noc-msm-o.s3-ext.jc.rl.ac.uk/npd-eorca1-jra55v1/V1y"
# Construct NEMO model V-grid dataset, including vertical grid cell thicknesses (m) and meridional velocities (m/s):
ds_gridV = xr.open_zarr(gridV_url, consolidated=True, chunks={})
ds_gridV
<xarray.Dataset> Size: 9GB
Dimensions: (depthv: 75, axis_nbounds: 2, time_counter: 49,
y: 331, x: 360)
Coordinates:
* depthv (depthv) float32 300B 0.5058 1.556 ... 5.902e+03
nav_lat (y, x) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
nav_lon (y, x) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
time_centered (time_counter) datetime64[ns] 392B dask.array<chunksize=(1,), meta=np.ndarray>
* time_counter (time_counter) datetime64[ns] 392B 1976-07-02 ... 2...
Dimensions without coordinates: axis_nbounds, y, x
Data variables: (12/13)
depthv_bounds (depthv, axis_nbounds) float32 600B dask.array<chunksize=(25, 2), meta=np.ndarray>
e3v (time_counter, depthv, y, x) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
hfy (time_counter, y, x) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
hfy_adv (time_counter, y, x) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
hfy_diff (time_counter, y, x) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
somesatr (time_counter, y, x) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
... ...
time_centered_bounds (time_counter, axis_nbounds) datetime64[ns] 784B dask.array<chunksize=(1, 2), meta=np.ndarray>
time_counter_bounds (time_counter, axis_nbounds) datetime64[ns] 784B dask.array<chunksize=(1, 2), meta=np.ndarray>
v2o (time_counter, depthv, y, x) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
vmo (time_counter, depthv, y, x) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
vo (time_counter, depthv, y, x) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
vo_eiv (time_counter, depthv, y, x) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
Attributes:
name: OUTPUT/eORCA1_1y_grid_V
description: ocean V grid variables
title: ocean V grid variables
Conventions: CF-1.6
timeStamp: 2024-Dec-09 10:18:04 GMT
uuid: b77cb501-a564-4451-8908-d66fe201e54a
Creating a NEMODataTree
Next, let's create a NEMODataTree to store our domain and V-grid variables for the eORCA1 model.
# Define dictionary of grid datasets defining eORCA1 parent model domain with no child/grand-child nests:
# Note: domain_cfg z-dimension is expected to be named 'nav_lev'.
datasets = {"parent": {"domain": ds_domain, "gridV": ds_gridV}}
# Initialise a new NEMODataTree whose parent domain is zonally periodic & north-folding on F-points:
nemo = NEMODataTree.from_datasets(datasets=datasets, iperio=True, nftype="F", read_mask=True)
nemo
<xarray.DataTree 'NEMO model'>
Group: /
│ Dimensions: (time_counter: 49, axis_nbounds: 2)
│ Coordinates:
│ time_centered (time_counter) datetime64[ns] 392B dask.array<chunksize=(1,), meta=np.ndarray>
│ * time_counter (time_counter) datetime64[ns] 392B 1976-07-02 ... 2...
│ Dimensions without coordinates: axis_nbounds
│ Data variables:
│ time_centered_bounds (time_counter, axis_nbounds) datetime64[ns] 784B dask.array<chunksize=(1, 2), meta=np.ndarray>
│ time_counter_bounds (time_counter, axis_nbounds) datetime64[ns] 784B dask.array<chunksize=(1, 2), meta=np.ndarray>
│ Attributes:
│ nftype: F
│ iperio: True
├── Group: /gridT
│ Dimensions: (j: 331, i: 360, k: 75)
│ Coordinates:
│ gphit (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ glamt (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ * k (k) int64 600B 1 2 3 4 5 6 7 8 9 ... 68 69 70 71 72 73 74 75
│ * j (j) int64 3kB 1 2 3 4 5 6 7 8 ... 325 326 327 328 329 330 331
│ * i (i) int64 3kB 1 2 3 4 5 6 7 8 ... 354 355 356 357 358 359 360
│ Data variables:
│ e1t (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ e2t (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ top_level (j, i) int32 477kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ bottom_level (j, i) int32 477kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ tmask (k, j, i) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
│ tmaskutil (j, i) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ Attributes:
│ nftype: F
│ iperio: True
├── Group: /gridU
│ Dimensions: (j: 331, i: 360, k: 75)
│ Coordinates:
│ gphiu (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ glamu (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ * k (k) int64 600B 1 2 3 4 5 6 7 8 9 ... 68 69 70 71 72 73 74 75
│ * j (j) int64 3kB 1 2 3 4 5 6 7 8 ... 325 326 327 328 329 330 331
│ * i (i) float64 3kB 1.5 2.5 3.5 4.5 ... 357.5 358.5 359.5 360.5
│ Data variables:
│ e1u (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ e2u (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ umask (k, j, i) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
│ umaskutil (j, i) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ Attributes:
│ nftype: F
│ iperio: True
├── Group: /gridV
│ Dimensions: (k: 75, axis_nbounds: 2, time_counter: 49, j: 331,
│ i: 360)
│ Coordinates:
│ * depthv (k) float32 300B 0.5058 1.556 ... 5.698e+03 5.902e+03
│ time_centered (time_counter) datetime64[ns] 392B dask.array<chunksize=(1,), meta=np.ndarray>
│ gphiv (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ glamv (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ * k (k) int64 600B 1 2 3 4 5 6 7 ... 69 70 71 72 73 74 75
│ * j (j) float64 3kB 1.5 2.5 3.5 4.5 ... 329.5 330.5 331.5
│ * i (i) int64 3kB 1 2 3 4 5 6 ... 355 356 357 358 359 360
│ Dimensions without coordinates: axis_nbounds
│ Data variables: (12/17)
│ depthv_bounds (k, axis_nbounds) float32 600B dask.array<chunksize=(25, 2), meta=np.ndarray>
│ e3v (time_counter, k, j, i) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
│ hfy (time_counter, j, i) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
│ hfy_adv (time_counter, j, i) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
│ hfy_diff (time_counter, j, i) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
│ somesatr (time_counter, j, i) float32 23MB dask.array<chunksize=(1, 331, 360), meta=np.ndarray>
│ ... ...
│ vo (time_counter, k, j, i) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
│ vo_eiv (time_counter, k, j, i) float32 2GB dask.array<chunksize=(1, 25, 331, 360), meta=np.ndarray>
│ e1v (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ e2v (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ vmask (k, j, i) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
│ vmaskutil (j, i) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ Attributes:
│ name: OUTPUT/eORCA1_1y_grid_V
│ description: ocean V grid variables
│ title: ocean V grid variables
│ Conventions: CF-1.6
│ timeStamp: 2024-Dec-09 10:18:04 GMT
│ uuid: b77cb501-a564-4451-8908-d66fe201e54a
│ nftype: F
│ iperio: True
├── Group: /gridW
│ Dimensions: (j: 331, i: 360, k: 75)
│ Coordinates:
│ gphit (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ glamt (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ * k (k) float64 600B 0.5 1.5 2.5 3.5 4.5 ... 71.5 72.5 73.5 74.5
│ * j (j) int64 3kB 1 2 3 4 5 6 7 8 ... 325 326 327 328 329 330 331
│ * i (i) int64 3kB 1 2 3 4 5 6 7 8 ... 354 355 356 357 358 359 360
│ Data variables:
│ e1t (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ e2t (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ wmask (k, j, i) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
│ wmaskutil (j, i) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
│ Attributes:
│ nftype: F
│ iperio: True
└── Group: /gridF
Dimensions: (j: 331, i: 360, k: 75)
Coordinates:
gphif (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
glamf (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
* k (k) int64 600B 1 2 3 4 5 6 7 8 9 ... 68 69 70 71 72 73 74 75
* j (j) float64 3kB 1.5 2.5 3.5 4.5 ... 328.5 329.5 330.5 331.5
* i (i) float64 3kB 1.5 2.5 3.5 4.5 ... 357.5 358.5 359.5 360.5
Data variables:
e1f (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
e2f (j, i) float64 953kB dask.array<chunksize=(331, 360), meta=np.ndarray>
fmask (k, j, i) int8 9MB dask.array<chunksize=(75, 331, 360), meta=np.ndarray>
fmaskutil (j, i) int8 119kB dask.array<chunksize=(331, 360), meta=np.ndarray>
Attributes:
nftype: F
iperio: True
Calculating the AMOC vertical overturning stream function
Now that we have constructed our NEMODataTree, let's calculate the vertical overturning stream function.
In this example, our eORCA1 model uses $z^{*}$ vertical coordinates, so integral(), which supports integration along the $i$, $j$ and $k$ dimensions of the NEMO model grid, is appropriate. However, this would not be the case for a NEMO model with geopotential / terrain-following coordinates.
# Define Atlantic Ocean basin mask:
atlmask = ds_domain['atlmsk'].rename({"x":"i", "y":"j"}).astype(bool)
# Assign (i,j) coordinates of V-points:
atlmask['i'] = atlmask['i'] + 1
atlmask['j'] = atlmask['j'] + 1.5
# Compute vertical meridional overturning stream function:
moc_z_atl = nemo['gridV/vo'].integral(dims=["i", "k"],
cum_dims=["k"],
dir="+1",
mask=atlmask
)
moc_z_atl
<xarray.DataArray 'integral_ik(vo)' (time_counter: 49, k: 75, j: 331)> Size: 10MB
dask.array<nancumsum, shape=(49, 75, 331), dtype=float64, chunksize=(1, 25, 331), chunktype=numpy.ndarray>
Coordinates:
* time_counter (time_counter) datetime64[ns] 392B 1976-07-02 ... 2024-07-02
* depthv (k) float32 300B 0.5058 1.556 2.668 ... 5.698e+03 5.902e+03
time_centered (time_counter) datetime64[ns] 392B dask.array<chunksize=(1,), meta=np.ndarray>
* k (k) int64 600B 1 2 3 4 5 6 7 8 9 ... 68 69 70 71 72 73 74 75
* j (j) float64 3kB 1.5 2.5 3.5 4.5 ... 328.5 329.5 330.5 331.5
Notice that the resulting DataArray is backed by a Dask array, so the vertical overturning hasn't actually been computed yet. To do this, we need to call the .compute() method:
# Compute vertical overturning stream function in Sverdrups [1 Sv = 1E6 m3/s]:
moc_z_atl = 1E-6 * moc_z_atl.compute()
moc_z_atl
<xarray.DataArray 'integral_ik(vo)' (time_counter: 49, k: 75, j: 331)> Size: 10MB
array([[[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-5.93256017e-03, -5.98465019e-03, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.23823036e-02, -1.24593425e-02, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.93505099e-02, -1.93588036e-02, 0.00000000e+00],
...,
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-9.30921961e-01, -9.31773471e-01, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-9.30921961e-01, -9.31773471e-01, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-9.30921961e-01, -9.31773471e-01, 0.00000000e+00]],
[[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-9.31946621e-03, -9.62942149e-03, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-1.87225095e-02, -1.92967880e-02, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-2.81349402e-02, -2.89098083e-02, 0.00000000e+00],
...
-8.73345777e-01, -8.73976184e-01, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-8.73345777e-01, -8.73976184e-01, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
-8.73345777e-01, -8.73976184e-01, 0.00000000e+00]],
[[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
...,
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
[ 0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00]]],
shape=(49, 75, 331))
Coordinates:
* time_counter (time_counter) datetime64[ns] 392B 1976-07-02 ... 2024-07-02
* depthv (k) float32 300B 0.5058 1.556 2.668 ... 5.698e+03 5.902e+03
time_centered (time_counter) datetime64[ns] 392B 1976-07-02 ... 2024-07-02
* k (k) int64 600B 1 2 3 4 5 6 7 8 9 ... 68 69 70 71 72 73 74 75
* j (j) float64 3kB 1.5 2.5 3.5 4.5 ... 328.5 329.5 330.5 331.5
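As an aside, the lazy-then-compute pattern used above generalises to any Dask-backed array: operations only build a task graph until .compute() is called. A minimal sketch with a plain Dask array (assuming dask is installed):

```python
import dask.array as da

# Building the task graph is cheap -- nothing is evaluated yet:
x = da.ones((4, 4), chunks=(2, 2))
total = x.sum()

# .compute() executes the graph and returns a concrete result:
result = total.compute()
```

Scaling by 1E-6 before .compute(), as done above, simply adds one more cheap node to the graph; the unit conversion and the integration are evaluated together.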
Visualising the time-mean AMOC vertical overturning stream function
Finally, let's visualise the results by plotting the time-mean Atlantic Meridional Overturning stream function in depth coordinates:
plt.figure(figsize=(8, 4))
# Plot time-mean vertical meridional overturning stream function:
(moc_z_atl
.mean(dim='time_counter')
.plot(y='depthv', yincrease=False, cbar_kwargs={"label": "$\\Psi_{z}(j, z)$ [Sv]"})
)
# Axes labels:
plt.title('eORCA1-JRA55v1: AMOC$_{z}$', fontdict={'size':12, 'weight':'bold'})
plt.xlabel('NEMO Model $j$ Coordinate', fontdict={'size':11, 'weight':'bold'})
plt.ylabel('Depth $z$ (m)', fontdict={'size':11, 'weight':'bold'})
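A common follow-up diagnostic is the overturning strength at each latitude, i.e. the maximum of the time-mean stream function over depth. A hedged sketch with a made-up (k, j) array standing in for moc_z_atl.mean(dim='time_counter'):

```python
import numpy as np

# Hypothetical time-mean stream function psi(k, j) in Sv
# (a stand-in for the real moc_z_atl field computed above):
psi_mean = np.array([[0.0, 1.0, 2.0],
                     [3.0, 8.0, 5.0],
                     [1.0, 4.0, 2.0]])

# Overturning strength at each latitude index j: maximum over depth (axis 0):
strength = psi_mean.max(axis=0)
```

With the real field, the equivalent would be moc_z_atl.mean(dim='time_counter').max(dim='k'), yielding the familiar AMOC strength curve as a function of $j$.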