tinybig.model
This module provides the deep RPN model built by stacking multiple RPN layers on top of one another.
Deep RPN model
The multi-head multi-channel RPN layer equips RPN with greater capacity to approximate functions with diverse expansions concurrently. However, such a shallow architecture can be insufficient for modeling complex functions. The RPN model can also be given a deep architecture by stacking multiple RPN layers on top of one another.
Formally, we can represent the deep RPN model with multi-layers as follows:
\[ \begin{equation} \begin{cases} \text{Input: } & \mathbf{H}_0 = \mathbf{X},\\\\ \text{Layer 1: } & \mathbf{H}_1 = \left\langle \kappa_{\xi, 1}(\mathbf{H}_0), \psi_1(\mathbf{w}_1) \right\rangle + \pi_1(\mathbf{H}_0),\\\\ \text{Layer 2: } & \mathbf{H}_2 = \left\langle \kappa_{\xi, 2}(\mathbf{H}_1), \psi_2(\mathbf{w}_2) \right\rangle + \pi_2(\mathbf{H}_1),\\\\ \cdots & \cdots \ \cdots\\\\ \text{Layer K: } & \mathbf{H}_K = \left\langle \kappa_{\xi, K}(\mathbf{H}_{K-1}), \psi_K(\mathbf{w}_K) \right\rangle + \pi_K(\mathbf{H}_{K-1}),\\\\ \text{Output: } & \mathbf{Z} = \mathbf{H}_K. \end{cases} \end{equation} \]
In the above equation, the subscripts denote the layer index. The dimensions of the layer outputs can be represented as a list \([d_0, d_1, \cdots, d_{K-1}, d_K]\), where \(d_0 = m\) and \(d_K = n\) denote the input and the desired output dimensions, respectively. Therefore, once the component functions at each layer have been determined, the dimension list \([d_0, d_1, \cdots, d_{K-1}, d_K]\) alone suffices to represent the architecture of the RPN model.
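The layered computation above can be sketched in plain NumPy with hypothetical component functions: a second-order expansion for \(\kappa\), an identity reconciliation for \(\psi\), and a zero remainder \(\pi\). These are illustrative choices only, not tinybig's actual component implementations, and the function names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def kappa(H):
    # Hypothetical second-order expansion: concatenate [H, H^2],
    # doubling the feature dimension from d to D = 2 * d.
    return np.concatenate([H, H ** 2], axis=1)

def psi(w, D, d_out):
    # Hypothetical identity reconciliation: reshape the raw
    # parameter vector w into a (D, d_out) matrix.
    return w.reshape(D, d_out)

def pi(H, d_out):
    # Hypothetical zero remainder function.
    return np.zeros((H.shape[0], d_out))

def deep_rpn(X, dims):
    # dims = [d_0, d_1, ..., d_K]; each step computes
    # H_k = <kappa(H_{k-1}), psi(w_k)> + pi(H_{k-1}),
    # where the inner product is realized as a matrix product.
    H = X
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        D = 2 * d_in                       # expanded dimension for this layer
        w = rng.standard_normal(D * d_out) # randomly initialized parameters
        H = kappa(H) @ psi(w, D, d_out) + pi(H, d_out)
    return H

X = rng.standard_normal((4, 3))     # 4 instances, input dimension d_0 = 3
Z = deep_rpn(X, [3, 5, 2])          # dimension list [d_0, d_1, d_2]
print(Z.shape)                      # (4, 2)
```

As the final line shows, the dimension list \([3, 5, 2]\) fully specifies the sketched architecture once the component functions are fixed.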
Classes in this Module
This module contains the following categories of RPN models:
- Base Model Template
- Deep RPN Model
- Classic Machine Learning Models: RPN_SVM, RPN_PGM, RPN_Naive_Bayes
- Neural Network Models: RPN_MLP, RPN_KAN
- Vision Models: RPN_CNN
- Sequential Models: RPN_RNN, RPN_Regression_RNN
- Graph Models: RPN_GCN, RPN_GAT
- Transformer Models: RPN_Transformer