Results found: 2

1.
EN
Deep Neural Networks (DNNs) have shown great success in many fields, and various network architectures have been developed for different applications. Regardless of their complexity, however, DNNs do not provide model uncertainty. Bayesian Neural Networks (BNNs), on the other hand, are able to make probabilistic inferences. Among the various types of BNNs, Dropout as a Bayesian Approximation converts a Neural Network (NN) into a BNN by adding a dropout layer after each weight layer in the NN, providing a simple transformation from an NN to a BNN. For DNNs, however, adding a dropout layer after each weight layer leads to strong regularization because of the deep architecture. Previous studies [1, 2, 3] have shown that adding a dropout layer after every weight layer in a DNN is unnecessary, but how to place dropout layers in a ResNet for regression tasks is less explored. In this work, we perform an empirical study of how different dropout placements affect the performance of a Bayesian DNN. We use a regression model modified from ResNet as the DNN and place the dropout layers at different positions in the regression ResNet. Our experimental results show that it is not necessary to add a dropout layer after every weight layer in the regression ResNet for it to make Bayesian inferences. Placing dropout layers between the stacked blocks (i.e., Dense+Identity+Identity blocks) gives the best performance in Prediction Interval Coverage Probability (PICP), while placing a dropout layer after each stacked block gives the best performance in Root Mean Square Error (RMSE).
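The core idea of MC Dropout described above can be sketched in a few lines: keep dropout active at test time and draw repeated stochastic forward passes to obtain a predictive distribution. This is a minimal numpy illustration with a hypothetical two-block regression network; the weights, dropout rate, and the placement of a single dropout layer between the blocks are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny regression network: two weight "blocks".
# Dropout is placed only BETWEEN the blocks, not after every
# weight layer, mirroring the placement strategy studied above.
W1 = rng.normal(size=(1, 16))
W2 = rng.normal(size=(16, 1)) / 16

def forward(x, p_drop=0.1, training=True):
    h = np.maximum(x @ W1, 0.0)            # block 1 + ReLU
    if training:                            # dropout stays active at test time
        mask = rng.random(h.shape) >= p_drop
        h = h * mask / (1.0 - p_drop)       # inverted-dropout scaling
    return h @ W2                           # block 2 (output)

# MC Dropout: T stochastic forward passes give a predictive
# distribution; its mean is the point estimate and its percentiles
# give a prediction interval (the quantity PICP evaluates).
x = np.array([[0.5]])
samples = np.stack([forward(x) for _ in range(200)])
mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])
```

In a deep ResNet the same loop applies unchanged; only the number and placement of the dropout layers inside `forward` differ between the configurations compared in the study.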
2.
EN
Instantaneous phase is a commonly used attribute for characterizing structural and stratigraphic features. The conventional calculation method is to construct the complex-valued seismic trace, take the ratio of the imaginary part to the real part, and finally compute the arctangent of the ratio as the instantaneous phase attribute. In this way, the phase at a given time sample is the total phase rotation from the beginning of the trace to that point, which means the traditional instantaneous phase is cumulative. Furthermore, the phase obtained by the arctangent is usually wrapped, which makes it more difficult to apply to seismic interpretation. To address these two issues, we propose a new way to calculate improved local phase variation attributes. First, we calculate the traditional instantaneous phase and unwrap it. Then we set a time window on the unwrapped phase and compute the local phase variation using difference methods. Finally, we slide the time window along the whole trace to obtain the final phase variation attributes. This strategy turns the cumulative value into a local variational value, so that the obtained local phase variation is nearly zero in continuous regions but changes greatly at interfaces or abnormal structures. Tested on a numerical model and on real data, the proposed attributes perform well in channel detection, offering a useful approach to seismic structural interpretation with phase attributes.