Right-Aligning a Line's Width in Matplotlib

In Matplotlib, Text objects have horizontalalignment and verticalalignment properties that control how text is aligned, but for line segments drawn with plot I searched the whole web and found no ready-made way to align the line's width to the left or right of its normal, i.e. to make one edge of the stroke follow the plotted path.

Fortunately, the Matplotlib Transformations tutorial shows how to offset an artist with ScaledTranslation (https://matplotlib.org/3.2.2/tutorials/advanced/transforms_tutorial.html#using-offset-transforms-to-create-a-shadow-effect).

A bit of simple trigonometry then gets us there (Figure 1).

Figure 1
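The trigonometry boils down to this: a segment with direction (Δx, Δy) has the left-hand unit normal (-Δy, Δx)/|Δ|, and shifting every point by half the line width along that normal puts the stroke's right edge on the original path. A minimal sketch with the same endpoints used in the example below:

```python
import numpy as np

# Segment from (1, 1) to (4, 5): direction vector and its length.
dx, dy = 4 - 1, 5 - 1          # (3, 4)
length = np.hypot(dx, dy)      # 5.0

# Rotating the direction 90 degrees counter-clockwise gives the
# left-hand unit normal; scale it by half the line width to offset.
nx, ny = -dy / length, dx / length
```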

Straight to the code:

def segmentOffset(fig, x, y, linewidth, lr="left"):
    """Return a ScaledTranslation that shifts the segment (x, y) by half
    its line width along the segment's normal ("left" or "right")."""
    import matplotlib.transforms as transforms
    import numpy as np

    sign = {"left": (-1, 1), "right": (1, -1)}[lr]
    delta = np.squeeze(np.diff([x, y]))  # (dx, dy) of the segment, data units
    length = np.hypot(*delta)
    # Swapping (dx, dy) and applying the signs gives the normal direction.
    # Line width is in points (1 pt = 1/72 inch), so half the width is
    # linewidth / 144 inch -- ScaledTranslation with dpi_scale_trans
    # interprets the offset in inches, independent of the figure's dpi.
    dx, dy = delta[::-1] * sign * linewidth / 72.0 / length / 2.0
    return transforms.ScaledTranslation(dx, dy, fig.dpi_scale_trans)

Example and figure:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(5, 5))
x = [1, 4]
y = [1, 5]
plt.axis("equal")
lw = 10
ax.plot(x, y, color="k", ls=":", lw=2)
ax.plot(x, y, color="g", lw=lw, alpha=0.5, label="origin")
ax.plot(
    x,
    y,
    color="r",
    lw=lw,
    alpha=0.5,
    label="offset_left",
    transform=ax.transData + segmentOffset(fig, x, y, lw, lr="left"),
)
ax.legend()
Figure 2

Why bother with this? Because while plotting the alongshore distribution of maximum storm surge, the width of the inner line would spill over the outer line at concave corners of the coastline. See Figures 3 and 4.

Figure 3: line width 10, centered.
Figure 4: line width 5, right-aligned (left offset); the gaps appear because the grid boundary differs slightly from the actual coastline.

There are still imperfections, but as solutions go this one is quite cost-effective.

How to Download the NASA Global Precipitation Measurement (GPM) Dataset

1. Prepare an account at https://urs.earthdata.nasa.gov/ and make sure the "NASA GESDISC DATA ARCHIVE" app is in your approved-applications list. Here's the guidance: https://disc.gsfc.nasa.gov/earthdata-login

2. Search for and choose the GPM data you need at https://disc.gsfc.nasa.gov/datasets . Click [Get Data], set the refining options, then press [Get Data] again to obtain the list of download links.

3. The URLs in the download-list file are not directly accessible; they require the account you registered earlier. The download needs authentication, but plain HTTP auth fails (apparently because of redirects or similar), which means you have to obtain authorized cookies before requesting the URLs in bulk. For Python users: open one download link in a web browser, log in, and then copy the authorized cookies from the request headers.

The code then looks like this:

import requests

# 'xxxxxx' is the cookie value copied from the browser's request headers.
r = requests.get(url, cookies={"nasa_gesdisc_data_archive": "xxxxxx"})
with open(filepath, "wb") as f:
    f.write(r.content)
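For a whole list of links, one possible sketch is to reuse a single Session carrying the cookie and stream each file to disk instead of buffering it in memory. (Hedged: the helper names, the list-file path, and the cookie value below are placeholders of mine, not part of any NASA API; real GES DISC URLs may need a different file-naming rule.)

```python
import requests

def filename_from_url(url):
    """Derive a local file name from a download URL (query string stripped)."""
    return url.rsplit("/", 1)[-1].split("?")[0] or "download.dat"

def download_list(list_path, cookie_value):
    """Stream every URL listed in list_path to disk, one per line,
    reusing a single session that carries the authorized cookie."""
    session = requests.Session()
    session.cookies.set("nasa_gesdisc_data_archive", cookie_value)
    with open(list_path) as links:
        for url in (line.strip() for line in links if line.strip()):
            with session.get(url, stream=True) as r:
                r.raise_for_status()
                with open(filename_from_url(url), "wb") as f:
                    # Write in 1 MiB chunks so large granules are not
                    # held entirely in memory.
                    for chunk in r.iter_content(chunk_size=1 << 20):
                        f.write(chunk)
```

Usage would be `download_list("subset_links.txt", "xxxxxx")`, with the cookie value taken from the browser as described above.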

4. But the download speed of Python's requests module is comparable to a turtle's, so below is a 'wget' method that is quite a bit faster (for Mac/Linux).

  1. Create a .netrc file in your home directory to store the login credentials for wget:
    • echo "machine urs.earthdata.nasa.gov login <username> password <password>" >> ~/.netrc
    • chmod 0600 ~/.netrc  # (to be safe)
  2. Create a cookie file to record cookies and persist the session across wget runs (it can live anywhere, as long as you pass the correct path in the wget command):
    • touch ~/.urs_cookies
  3. Download multiple data files using wget -i:
    • wget --load-cookies ~/.urs_cookies --save-cookies ~/.urs_cookies --auth-no-challenge=on --keep-session-cookies --content-disposition -i <list.txt>  # (list.txt contains the URLs of the files to download)

DONE.