UCLA Researcher Develops a Python Library Called ClimateLearn for Accessing State-of-the-Art Climate Data and Machine Learning Models in a Standardized and Straightforward Way - MarkTechPost



Extreme weather has become a typical occurrence, especially in recent years. Climate change is the main factor behind such extreme weather-related phenomena, from the torrential downpours in Pakistan that submerged large portions of the country to the exceptional heat waves that fueled wildfires across Portugal and Spain. The Earth's average surface temperature could rise by roughly four degrees Celsius by the end of the century if proper action is not taken soon. According to scientists, this temperature rise will contribute to more frequent extreme weather events.

General circulation models (GCMs) are the tools scientists use to forecast future weather and climate. A GCM is a system of differential equations that is integrated over time to produce forecasts for variables such as temperature, wind speed, and precipitation. These models are physically interpretable and produce reasonably accurate results. However, their core problem is that running the simulations requires significant computational power, and calibrating the models against large volumes of observational data is difficult.

This is where machine learning techniques have proven useful. In "weather forecasting" and "spatial downscaling" in particular, these algorithms have become competitive with more established climate models. Weather forecasting means predicting future values of climate variables: for instance, forecasting next week's rainfall in Meghalaya from the previous week's daily rainfall (in cm). Spatial downscaling is the problem of refining spatially coarse climate model projections, for instance from a 100 km x 100 km grid to a 1 km x 1 km grid.
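To make the downscaling idea concrete, here is a toy sketch of mapping a coarse field onto a finer grid with bilinear interpolation. This is not ClimateLearn code, and learned downscaling models replace this fixed interpolation rule with a trained mapping; interpolation is just the classic baseline.

```python
# Toy spatial downscaling: refine a coarse 2D field with bilinear
# interpolation. Purely illustrative, not taken from any library.

def bilinear_upsample(grid, factor):
    """Upsample a 2D list-of-lists `grid` by an integer `factor`,
    interpolating linearly between the original cell values."""
    rows, cols = len(grid), len(grid[0])
    out_rows, out_cols = (rows - 1) * factor + 1, (cols - 1) * factor + 1
    out = []
    for i in range(out_rows):
        y = i / factor                    # fractional row in the coarse grid
        y0 = min(int(y), rows - 2)
        ty = y - y0
        row = []
        for j in range(out_cols):
            x = j / factor                # fractional column in the coarse grid
            x0 = min(int(x), cols - 2)
            tx = x - x0
            # Blend the four surrounding coarse cells.
            v = (grid[y0][x0]       * (1 - ty) * (1 - tx)
                 + grid[y0][x0 + 1] * (1 - ty) * tx
                 + grid[y0 + 1][x0] * ty       * (1 - tx)
                 + grid[y0 + 1][x0 + 1] * ty   * tx)
            row.append(v)
        out.append(row)
    return out

coarse = [[0.0, 10.0],
          [20.0, 30.0]]                   # e.g. a 2x2 temperature field
fine = bilinear_upsample(coarse, 2)       # refined to a 3x3 field
```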

Forecasting and downscaling are analogous to a variety of computer vision tasks. The main distinction is that in weather forecasting and spatial downscaling, the machine learning model must utilize exogenous inputs in various modalities. For instance, several factors, such as humidity and wind speed, along with historical surface temperatures, influence future surface temperatures. These variables must be provided as inputs to the model alongside the surface temperatures themselves.

Deep learning research has exploded in recent years, and researchers in both machine learning and climate science are now investigating how deep learning techniques might address weather forecasting and spatial downscaling. The two communities, however, take contrasting approaches to applying machine learning. Machine learning researchers focus on which architectures are best suited to which problems and on processing data in ways well suited to modern methods, whereas climate scientists make more use of physical equations and keep the appropriate evaluation metrics in mind.

However, ambiguous terminology ("bias" in climate modeling versus "bias" in machine learning), a lack of standardization in applying machine learning to climate science problems, and limited expertise in analyzing climate data have kept both communities from unlocking their full potential. To address these issues, researchers at the University of California, Los Angeles (UCLA) have developed ClimateLearn, a Python package that provides easy, standardized access to large climate datasets and state-of-the-art machine learning models. The package exposes a variety of datasets, strong baseline models, and a suite of metrics and visualizations, enabling large-scale benchmarking of weather forecasting and spatial downscaling techniques.

ClimateLearn delivers data in a format that current deep learning architectures can easily consume. The package includes data from ERA5, the fifth-generation reanalysis of the historical global climate produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). A reanalysis dataset uses modeling and data-assimilation techniques to merge historical observations into global estimates; by combining real observations with modeling, reanalysis products achieve complete global coverage with reasonable accuracy. In addition to raw ERA5 data, ClimateLearn supports preprocessed ERA5 data from WeatherBench, a benchmark dataset for data-driven weather forecasting.

The baseline models implemented in ClimateLearn are well tuned for climate tasks and can easily be extended for other downstream pipelines in climate science. ClimateLearn supports simple statistical baselines such as linear regression, persistence, and climatology, as well as more sophisticated deep learning models such as residual convolutional neural networks, U-Nets, and vision transformers. The package also provides evaluation metrics, including (latitude-weighted) root mean squared error, the anomaly correlation coefficient, and Pearson's correlation coefficient, along with visualizations of model predictions, the ground truth, and the discrepancy between the two.
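Latitude weighting deserves a quick illustration: grid cells near the poles cover less area on the globe, so each row of the grid is weighted by the cosine of its latitude, normalized to mean one. The sketch below follows the common WeatherBench-style convention; ClimateLearn's exact implementation may differ in detail.

```python
import math

# Hedged sketch of latitude-weighted RMSE. Rows near the poles cover
# less area, so each row's squared error is weighted by cos(latitude),
# with weights normalized to mean 1. Illustrative, not library code.

def lat_weighted_rmse(pred, target, lats_deg):
    """pred, target: 2D lists indexed [lat][lon]; lats_deg: one latitude per row."""
    cos_lats = [math.cos(math.radians(lat)) for lat in lats_deg]
    mean_cos = sum(cos_lats) / len(cos_lats)
    weights = [c / mean_cos for c in cos_lats]   # normalized to mean 1
    total, count = 0.0, 0
    for w, p_row, t_row in zip(weights, pred, target):
        for p, t in zip(p_row, t_row):
            total += w * (p - t) ** 2
            count += 1
    return math.sqrt(total / count)
```

With uniform weights (all latitudes equal) this reduces to the ordinary RMSE.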

The researchers' primary goal in developing ClimateLearn was to close the gap between the climate science and machine learning communities by making climate datasets easily accessible, providing baseline models for easy comparison, and offering metrics and visualizations to interpret model outputs. In the near future, the researchers intend to add support for new datasets such as CMIP6 (the sixth phase of the Coupled Model Intercomparison Project). The team will also support probabilistic forecasting, with new uncertainty-quantification metrics and machine learning methods such as Bayesian neural networks and diffusion models. The researchers are enthusiastic about the opportunities this opens up: machine learning researchers can learn more about model performance, expressiveness, and robustness, while climate scientists can study how changing the values of input variables shifts the distributions of the results. The team also plans to make the package open source and looks forward to contributions from the community.

Check out the Tool, Colab, and Blog. All Credit For This Research Goes To the Researchers on This Project. Also, don’t forget to join our Reddit Page, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

The rapid growth of computational power and its increasing accessibility have enabled a wide range of applications in computer vision and graphics. As a result, complex tasks such as object detection, facial recognition, and 3D reconstruction can now be performed in a short amount of time. In the 3D domain especially, these advances have enabled computer games, proof-of-concept 3D movies and animation, and virtual and augmented reality experiences. Furthermore, many problems in computer vision and graphics are close to being, or have already been, addressed with the help of deep learning and artificial intelligence.

These methods are based on artificial neural networks, which learn complex patterns in data. Deep learning networks are hierarchical: they are composed of multiple layers, with each layer learning a certain kind of pattern. The learning process can be supervised, meaning labeled data is used to train the model, or unsupervised, meaning no labels are given during training. Once trained, the model can make predictions about data it has not seen before. Here, "prediction" is meant broadly: it covers operations such as object detection, object and entity classification, multimedia generation, point cloud compression, and much more.

Using these neural networks to address problems in the 3D domain can be tricky, as it requires more computational power and care than in the 2D domain. One important task concerns 3D editing and the human interpretability of geometric parameters.

Easing the 3D editing or customization process matters for gaming and computer graphics applications. Gamers likely know how detailed the character-customization editors in some games can be when creating a personalized avatar, from sports titles to action games. Have you ever wondered how much time it takes to set up all these characteristics on the developer's side? Defining them can take weeks or, in the worst case, months.

Good news comes from the research work presented in this article, which shines a light on this problem and proposes a solution to automate the process.

The proposed framework is depicted in the figure below.

The objective is to recover an editable 3D mesh from an input item represented as a 3D point cloud or a 2D sketch. To do this, the authors create a procedural program that enforces a set of shape constraints and is parameterized by controls that are easy for humans to understand. After training a neural network to infer the program parameters, they can generate and recover an editable 3D shape by running the program. Because the program exposes interpretable controls in addition to structural information, it yields consistent semantic part segmentation by construction.

Specifically, the program supports three parameter types: discrete, binary, and continuous. Disentangling the shape parameters guarantees precise control over individual object characteristics. For instance, the seat's shape can be isolated from the other parts of a chair, so modifying the seat does not affect the geometry of the remaining parts, such as the backrest or the legs.
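The three parameter types can be sketched as simple typed specifications with validation. The names and structure below are illustrative assumptions for a chair-like object, not GeoCode's actual API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three parameter types a procedural shape
# program might expose. Names are invented for illustration.

@dataclass
class ContinuousParam:
    name: str
    lo: float
    hi: float
    def validate(self, v):
        return self.lo <= v <= self.hi      # value must lie in [lo, hi]

@dataclass
class DiscreteParam:
    name: str
    choices: tuple
    def validate(self, v):
        return v in self.choices            # value must be one of the choices

@dataclass
class BinaryParam:
    name: str
    def validate(self, v):
        return v in (True, False)           # on/off flag

# Disentangled chair description: editing seat_width leaves
# leg_count and has_armrests untouched.
chair_params = [
    ContinuousParam("seat_width", 0.3, 1.0),
    DiscreteParam("leg_count", (3, 4)),
    BinaryParam("has_armrests"),
]
```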

To provide editing flexibility, mesh primitives such as spheres or planes are created and modified according to the user's needs. Two curves guide the generation of the final shape: a one-dimensional curve describing a path in 3D space, and a two-dimensional curve representing the profile of the shape.

Defining curves in this way enables a rich variety of combinations, specified not only by the curves themselves but also by attachment points: the points at which two curves are connected. Each attachment point is defined by a scalar value between 0 and 1, where 0 represents the beginning of the curve and 1 its end.
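The attachment-point idea is just arc-length parameterization: a scalar t in [0, 1] picks a position along a curve. A minimal sketch for a 3D polyline (illustrative only, not GeoCode code):

```python
import math

# A scalar t in [0, 1] selects a point along a 3D polyline by arc
# length: t = 0 is the start of the curve, t = 1 its end.

def point_at(polyline, t):
    """Return the 3D point a fraction t of the way along the polyline."""
    lengths = [math.dist(a, b) for a, b in zip(polyline, polyline[1:])]
    target = t * sum(lengths)               # distance to travel along the curve
    for (a, b), seg_len in zip(zip(polyline, polyline[1:]), lengths):
        if target <= seg_len:
            s = target / seg_len if seg_len else 0.0
            # Linear interpolation within the current segment.
            return tuple(ai + s * (bi - ai) for ai, bi in zip(a, b))
        target -= seg_len
    return polyline[-1]                     # t >= 1 clamps to the end

path = [(0, 0, 0), (1, 0, 0), (1, 1, 0)]   # an L-shaped path of length 2
mid = point_at(path, 0.5)                  # halfway along the curve
```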

Before feeding the parameters to the program for the final 3D shape recovery, an encoder-decoder network maps a point cloud or sketch input to the parameter representation.

The encoder embeds the input into a global feature vector. This embedding is then fed to a set of decoders, each responsible for translating it into a single parameter (hence the disentanglement).

GeoCode can be used for various editing tasks, such as interpolation between shapes. An example is shown in the figure below.
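Since each shape is encoded as program parameters, interpolating between two shapes reduces to blending their parameter vectors: continuous values can be linearly interpolated, while discrete and binary ones must snap from one value to the other. A minimal sketch with hypothetical chair parameters (not GeoCode's actual representation):

```python
# Shape interpolation in parameter space. Continuous parameters blend
# linearly; discrete/binary ones switch at the midpoint. Parameter
# names are invented for illustration.

def interpolate_params(p0, p1, alpha):
    """Blend two parameter dicts with mixing weight alpha in [0, 1]."""
    out = {}
    for key in p0:
        a, b = p0[key], p1[key]
        if isinstance(a, float):
            out[key] = (1 - alpha) * a + alpha * b   # linear blend
        else:
            out[key] = a if alpha < 0.5 else b       # snap discrete/binary
    return out

chair_a = {"seat_width": 0.4, "leg_count": 4, "has_armrests": False}
chair_b = {"seat_width": 0.8, "leg_count": 3, "has_armrests": True}
midpoint = interpolate_params(chair_a, chair_b, 0.5)
```

Running the program on each intermediate parameter set then yields a sequence of valid, editable meshes rather than a naive vertex-wise blend.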

This was the summary of GeoCode, a novel AI framework to address the 3D shape synthesis problem. If you are interested, you can find more information in the links below.

Check out the Paper, Github, and Project. All Credit For This Research Goes To the Researchers on This Project.