#Amazon Web Services#AWS#ubuntu#인공 신경망 최적화#활성화 함수#Loss Function#tomcat8#Xshell#ssh#로컬에서 접속하기#준지도 학습#Adversarial-based#Network-based#Mapping-based#Instances-based#전이 학습#모델 튜닝을 위한 10가지 방법#평가 기준 정의#가중 손실 함수#샘플링 기법#데이터 증식#Data Imbalance#데이터 불균형#Noise Injection#Batch Normlization#DropConnect#L1 L2 Regularization#Sparse Autoencoder#Development of RNNs#순환 신경망의 발전#Vanilla RNN의 문제점#Gated Recurrent Unit#Long Short-Term Memory models#Recurrent Neural Networks#Convexity#볼록성#기울기 사라짐#가변 학습률#모멘텀 기법#Stochastic Gradient Descent#인공 신겨망 최적화#Descent Method#하강법#RMSE#Binary Cross Entropy Function#Cross Entropy Function#Classification Function#Regression Function#XOR문제#ELU Function#Leaky ReLU Function#ReLU Function#tanh Function#하이퍼블릭 탄젠트 함수#Linear Function#선형함수#FinBERT-QA#인프런 강의#Sentence-BERT#SentenceKoBART#신경검색엔진#Neural Search Engine#LegalQA#Denoising AutoEncoder#stacked autoencoder#경사 하강법#확률적 경사 하강법#AI공부#RMSProp#Transfer Learning#BERT#cross validation#Data Augmentation#attention mechanism#Convolutional Autoencoder#Early Stopping#손실함수#오토인코더#비지도 학습#지도 학습#Vanishing Gradient#과적합#손실 함수#주피터 노트북#seq2seq#ReLU#강화 학습#autoencoder#시그모이드 함수#overfitting#jupyter notebook#rnn#순환신경망#lstm#gradient descent#MLP#은닉층#딥러닝#Sigmoid Function#JAVA설치#Jina#gradient#dropout#MSE#EC2#우분투 서버#SGD#GRU#인스턴스 생성#객체지향프로그래밍#ADAM#Mae#GPT#다형성#Momentum#Encoder#Decoder#web application server#transformer#캡슐화#추상화#web application#Optimization#인공지능#OOP#상속#초보#최적화#배포#tomcat#미분#WAS#apache2#스케줄링#연동#실패#검색엔진#Java#MySQL

+ Recent posts