Hello, I trained and tested with the code unmodified. During testing I found that the test loss kept growing: at epoch 1 it was only about 30, but by epoch 6 it had reached 6000. Is this an overfitting phenomenon? What could cause it? I would greatly appreciate an answer.
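One way to check whether this pattern is overfitting (a minimal sketch, not the repository's own code) is to record both training and test loss per epoch and compare their trends: classic overfitting shows training loss still falling while test loss rises. The loss values below are illustrative, echoing the numbers reported above.

```python
def diverging(train_losses, test_losses):
    """Heuristic overfitting check: training loss is monotonically
    non-increasing while test loss is monotonically non-decreasing."""
    train_down = all(a >= b for a, b in zip(train_losses, train_losses[1:]))
    test_up = all(a <= b for a, b in zip(test_losses, test_losses[1:]))
    return train_down and test_up

# Hypothetical per-epoch losses shaped like the report (test loss 30 -> 6000):
print(diverging([50, 20, 10, 5], [30, 200, 900, 6000]))  # True -> looks like overfitting
print(diverging([50, 40, 30, 20], [30, 25, 20, 15]))     # False -> both still improving
```

If the training loss is also exploding, the cause is more likely a too-high learning rate or numerical instability than overfitting, so logging both curves is the quickest diagnostic.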