
Commit

Merge remote-tracking branch 'origin/main'
Y-nuclear committed Jul 2, 2024
2 parents 1de9239 + 523edb6 commit de3eb76
Showing 4 changed files with 96 additions and 63 deletions.
24 changes: 23 additions & 1 deletion README.md
@@ -3,7 +3,7 @@
![PyPI - License](https://img.shields.io/pypi/l/gnnwr)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/gnnwr)
[![PyPI - Version](https://img.shields.io/pypi/v/gnnwr)](https://pypi.org/project/gnnwr/)
![PyPI - Downloads](https://img.shields.io/pypi/dm/gnnwr)
[![Downloads](https://static.pepy.tech/badge/gnnwr)](https://pepy.tech/project/gnnwr)


A PyTorch implementation of spatiotemporal intelligent regression (STIR) models. The repository contains:
@@ -158,6 +158,14 @@ Housing prices are closely related to the lives of new urban residents, and they

> Wang, Z., Wang, Y., Wu, S., & Du, Z. (2022). House Price Valuation Model Based on Geographically Neural Network Weighted Regression: The Case Study of Shenzhen, China. *ISPRS International Journal of Geo-Information*, *11*(8), 450.

An optimized spatial proximity (OSP) measure was integrated into GNNWR. The optimized spatial proximity fuses multiple distance measures, improving the model's ability to capture spatially nonstationary processes.

<p align="center">
<img title="OSP" src="assets/figure OSP.JPG" alt="OSP" width=75%>
</p>

> Ding, J., Cen, W., Wu, S., Chen, Y., Qi, J., Huang, B., & Du, Z. (2024). A neural network model to optimize the measure of spatial proximity in geographically weighted regression approach: a case study on house price in Wuhan. *International Journal of Geographical Information Science*, 1–21.
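The minimal sketch below illustrates the general idea of fusing several candidate distance measures into one learned proximity. The module name, network structure, and choice of input distances are illustrative assumptions, not the architecture published in the paper.

```python
import torch
import torch.nn as nn

class FusedProximity(nn.Module):
    """Toy example: learn a single proximity value from several distance measures."""

    def __init__(self, n_distances=3):
        super().__init__()
        # Small MLP mapping a vector of candidate distances (e.g. Euclidean,
        # road-network, travel-time) to one non-negative proximity score.
        self.net = nn.Sequential(
            nn.Linear(n_distances, 16),
            nn.ReLU(),
            nn.Linear(16, 1),
            nn.Softplus(),
        )

    def forward(self, distances):         # distances: (n_samples, n_distances)
        return self.net(distances)        # fused proximity: (n_samples, 1)

# proximity = FusedProximity()(torch.rand(10, 3))
```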
#### 4.3.2 Land Surface Temperature

Spatial downscaling is an important approach to obtaining high-resolution land surface temperature (LST) for thermal environment research. A high-resolution LST downscaling method based on GNNWR was developed to address this problem. The results show that the GNNWR model achieved superior downscaling accuracy compared with widely used methods in four test areas that differ greatly in topography, landforms, and seasons, suggesting that GNNWR is a practical method for LST downscaling given its high accuracy and model performance.
@@ -168,6 +176,18 @@ Spatial downscaling is an important approach to obtain high-resolution land surf

> Liang, M., Zhang, L., Wu, S., Zhu, Y., Dai, Z., Wang, Y., ... & Du, Z. (2023). A High-Resolution Land Surface Temperature Downscaling Method Based on Geographically Weighted Neural Network Regression. *Remote Sensing*, *15*(7), 1740.
### 4.4 Geology

#### 4.4.1 Mineral Prospectivity

In the field of mineral forecasting, accurate prediction of mineral resources is essential to meet the energy needs of modern society. A geographically neural network-weighted logistic regression was applied to mineral prospectivity mapping. The model combines spatial patterns and neural networks with Shapley additive explanations, effectively handling the anisotropy of variables and the nonlinear relationships between variables to achieve accurate predictions and provide explanations of mineralization in complex spatial environments.

<p align="center">
<img title="mineral prospectivity" src="assets/figure_mine.jpg" alt="mineral prospectivity" width=75%>
</p>

> Wang, L., Yang, J., Wu, S., Hu, L., Ge, Y., & Du, Z. (2024). Enhancing mineral prospectivity mapping with geospatial artificial intelligence: A geographically neural network-weighted logistic regression approach. *International Journal of Applied Earth Observation and Geoinformation*, 128, 103746.
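The sketch below illustrates the general idea of a spatially weighted logistic head, in which location-specific weights (such as those produced by a weight-estimation network like SWNN) rescale globally fitted logistic-regression coefficients. All names and shapes are illustrative assumptions, not the published model.

```python
import torch

def spatially_weighted_logit(spatial_weights, x, beta_global, intercept):
    """Toy spatially weighted logistic head (illustrative only).

    spatial_weights: (n, k) location-specific weights, x: (n, k) covariates,
    beta_global: (k,) globally fitted coefficients, intercept: scalar.
    """
    logits = (spatial_weights * x * beta_global).sum(dim=1) + intercept
    return torch.sigmoid(logits)  # per-location probability of mineralization

# probs = spatially_weighted_logit(torch.rand(5, 3), torch.rand(5, 3),
#                                  torch.tensor([0.4, -0.2, 0.1]), 0.0)
```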

**Further, these spatiotemporal intelligent regression models can be applied to other spatiotemporal modeling problems and socioeconomic phenomena.**

## 5 Related Research Papers
@@ -186,6 +206,8 @@ Spatial downscaling is an important approach to obtain high-resolution land surf
5. Liu, C., Wu, S., Dai, Z., Wang, Y., Du, Z., Liu, X., & Qiu, C. (2023). High-Resolution Daily Spatiotemporal Distribution and Evaluation of Ground-Level Nitrogen Dioxide Concentration in the Beijing–Tianjin–Hebei Region Based on TROPOMI Data. *Remote Sensing*, *15*(15), 3878.
6. Wang, Z., Wang, Y., Wu, S., & Du, Z. (2022). House Price Valuation Model Based on Geographically Neural Network Weighted Regression: The Case Study of Shenzhen, China. *ISPRS International Journal of Geo-Information*, *11*(8), 450.
7. Wu, S., Du, Z., Wang, Y., Lin, T., Zhang, F., & Liu, R. (2020). Modeling spatially anisotropic nonstationary processes in coastal environments based on a directional geographically neural network weighted regression. *Science of the Total Environment*, *709*, 136097.
8. Wang, L., Yang, J., Wu, S., Hu, L., Ge, Y., & Du, Z. (2024). Enhancing mineral prospectivity mapping with geospatial artificial intelligence: A geographically neural network-weighted logistic regression approach. *International Journal of Applied Earth Observation and Geoinformation*, *128*, 103746.
9. Ding, J., Cen, W., Wu, S., Chen, Y., Qi, J., Huang, B., & Du, Z. (2024). A neural network model to optimize the measure of spatial proximity in geographically weighted regression approach: a case study on house price in Wuhan. *International Journal of Geographical Information Science*, 1–21.


## 6 Contributing
Binary file added assets/figure OSP.JPG
Binary file added assets/figure_mine.jpg
135 changes: 73 additions & 62 deletions src/gnnwr/networks.py
@@ -26,6 +26,68 @@ def default_dense_layer(insize, outsize):
size = int(math.pow(2, int(math.log2(size)) - 1))
return dense_layer

class LinearNetwork(nn.Module):
"""
LinearNetwork is a neural network with dense layers, which is used to calculate the weight of features.
| The each layer of LinearNetwork is as follows:
| full connection layer -> batch normalization layer -> activate function -> drop out layer
Parameters
----------
dense_layer: list
a list of dense layers of Neural Network
insize: int
input size of Neural Network(must be positive)
outsize: int
Output size of Neural Network(must be positive)
drop_out: float
drop out rate(default: ``0.2``)
activate_func: torch.nn.functional
activate function(default: ``nn.PReLU(init=0.1)``)
batch_norm: bool
whether use batch normalization(default: ``True``)
"""
def __init__(self, insize, outsize, drop_out=0, activate_func=None, batch_norm=False):
super(LinearNetwork, self).__init__()
self.layer = nn.Linear(insize, outsize)
if drop_out < 0 or drop_out > 1:
raise ValueError("drop_out must be in [0, 1]")
elif drop_out == 0:
self.drop_out = nn.Identity()
else:
self.drop_out = nn.Dropout(drop_out)
if batch_norm:
self.batch_norm = nn.BatchNorm1d(outsize)
else:
self.batch_norm = nn.Identity()

if activate_func is None:
self.activate_func = nn.Identity()
else:
self.activate_func = activate_func
self.reset_parameter()

def reset_parameter(self):
torch.nn.init.kaiming_uniform_(self.layer.weight, a=0, mode='fan_in')
if self.layer.bias is not None:
self.layer.bias.data.fill_(0)

def forward(self, x):
x = x.to(torch.float32)
x = self.layer(x)
x = self.batch_norm(x)
x = self.activate_func(x)
x = self.drop_out(x)
return x

def __str__(self) -> str:
# nn.Identity has no attribute ``p``, so report 0 when dropout is disabled
drop_rate = self.drop_out.p if isinstance(self.drop_out, nn.Dropout) else 0
return f"LinearNetwork: {self.layer.in_features} -> {self.layer.out_features}\n" + \
f"Dropout: {drop_rate}\n" + \
f"BatchNorm: {self.batch_norm}\n" + \
f"Activation: {self.activate_func}"

def __repr__(self) -> str:
return self.__str__()

class SWNN(nn.Module):
"""
@@ -68,26 +130,15 @@ def __init__(self, dense_layer=None, insize=-1, outsize=-1, drop_out=0.2, activa
self.fc = nn.Sequential()

for size in self.dense_layer:
# add full connection layer
self.fc.add_module("swnn_full" + str(count),
nn.Linear(lastsize, size, bias=True)) # add full connection layer
if batch_norm:
# add batch normalization layer if needed
self.fc.add_module("swnn_batc" + str(count), nn.BatchNorm1d(size))
self.fc.add_module("swnn_acti" + str(count), self.activate_func) # add activate function
self.fc.add_module("swnn_drop" + str(count),
nn.Dropout(self.drop_out)) # add drop_out layer
lastsize = size # update the size of last layer
LinearNetwork(lastsize, size, drop_out, activate_func, batch_norm))
lastsize = size
count += 1
self.fc.add_module("full" + str(count),
nn.Linear(lastsize, self.outsize)) # add the last full connection layer
for m in self.modules():
if isinstance(m, nn.Linear):
torch.nn.init.kaiming_uniform_(m.weight, a=0, mode='fan_in')
if m.bias is not None:
m.bias.data.fill_(0)

LinearNetwork(lastsize, self.outsize))
def forward(self, x):
x.to(torch.float32)
x = x.to(torch.float32)
x = self.fc(x)
return x
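
# --- Usage sketch (illustrative, not part of networks.py) ---
# SWNN stacks LinearNetwork blocks defined by ``dense_layer`` and ends with a
# final LinearNetwork output block. The hidden sizes below are hypothetical:
#   net = SWNN(dense_layer=[128, 64], insize=10, outsize=5)
#   w = net(torch.rand(32, 10))   # -> e.g. spatial weights of shape (32, 5)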

@@ -129,27 +180,15 @@ def __init__(self, dense_layer, insize, outsize, drop_out=0.2, activate_func=nn.
self.fc = nn.Sequential()
for size in self.dense_layer:
self.fc.add_module("stpnn_full" + str(count),
nn.Linear(lastsize, size)) # add full connection layer
if batch_norm:
# add batch normalization layer if needed
self.fc.add_module("stpnn_batc" + str(count), nn.BatchNorm1d(size))
self.fc.add_module("stpnn_acti" + str(count), self.activate_func) # add activate function
self.fc.add_module("stpnn_drop" + str(count),
nn.Dropout(self.drop_out)) # add drop_out layer
lastsize = size # update the size of last layer
LinearNetwork(lastsize, size, drop_out, activate_func, batch_norm))
lastsize = size
count += 1
self.fc.add_module("full" + str(count), nn.Linear(lastsize, self.outsize)) # add the last full connection layer
self.fc.add_module("acti" + str(count), nn.ReLU())

for m in self.modules():
if isinstance(m, nn.Linear):
torch.nn.init.kaiming_uniform_(m.weight)
if m.bias is not None:
m.bias.data.fill_(0)
self.fc.add_module("full" + str(count),
LinearNetwork(lastsize, self.outsize,activate_func=activate_func))

def forward(self, x):
# STPNN
x.to(torch.float32)
x = x.to(torch.float32)
batch = x.shape[0]
height = x.shape[1]
x = torch.reshape(x, shape=(batch * height, x.shape[2]))
@@ -196,32 +235,4 @@ def forward(self, input1):
STNN_output = self.STNN(STNN_input)
SPNN_output = self.SPNN(SPNN_input)
output = torch.cat((STNN_output, SPNN_output), dim=-1)
return output


# Weight-sharing computation
def weight_share(model, x, output_size=1):
"""
weight_share is a function to calculate the output of neural network with weight sharing.
Parameters
----------
model: torch.nn.Module
neural network with weight sharing
x: torch.Tensor
input of neural network
output_size: int
output size of neural network
Returns
-------
output: torch.Tensor
output of neural network
"""
x.to(torch.float32)
batch = x.shape[0]
height = x.shape[1]
x = torch.reshape(x, shape=(batch * height, x.shape[2]))
output = model(x)
output = torch.reshape(output, shape=(batch, height, output_size))
return output
return output
