I am a high school student and I've realised I just improved the PSNR score of Uformer-B on the SIDD denoising and GoPro deblurring datasets by about 1 dB. The same change also applies to training DehazeFormer. At least, that's what the first 15 epochs of each training run show (I am planning on renting an A100 for full training soon).
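For anyone unfamiliar with the metric: PSNR is just a log-scaled mean-squared error, so "about 1 dB" refers to an increase in the average of per-image values like the one computed below. This is a minimal NumPy sketch, not the official SIDD or GoPro evaluation code, which may differ in details such as border cropping and colour conversion.

```python
import numpy as np

def psnr(restored: np.ndarray, ground_truth: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, assuming images are scaled to [0, max_val]."""
    mse = np.mean((restored.astype(np.float64) - ground_truth.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Benchmarks report the mean PSNR over all test images,
# so "about 1 dB" means that mean went up by roughly 1.
```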
Is it worth publishing once I have fully trained it, or should I try to improve it further first? Also, what does the process of actually publishing look like?
I have reached out to local universities but I was ghosted.
Publish on arXiv. It’s free, the submission guidelines are well documented, and plenty of incredibly well-respected ML researchers publish there.
Would the results I have found be significant enough to publish yet?
In my opinion, writing a paper is good practice no matter the results. It might help you discern more valuable insights from your testing or approach.
In this situation, you have almost nothing to lose! I say go for it. Do both. Start a paper draft now and iterate on it as you benchmark more results. Often, writing and reflecting on your own research reinforces some of the concepts you’re tackling. All the more reason to write something up, even if you don’t release it.
If you do end up writing one, be sure to share it here!
It’s best to confirm performance improvements in real-world scenarios, not just on the benchmark test sets. If you can fully train the model and then evaluate it on additional datasets to confirm its efficacy, your results will be much better received.
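If you go that route, the check can be as simple as running the trained checkpoint over a held-out benchmark and comparing the average PSNR against the published Uformer-B numbers. A rough PyTorch-style sketch, assuming a `model` that maps degraded images to restored ones and a loader yielding `(degraded, clean)` pairs scaled to [0, 1]; the interface here is a placeholder, not the actual Uformer evaluation script.

```python
import torch

@torch.no_grad()
def evaluate_psnr(model, loader, device="cuda"):
    """Average PSNR (dB) of a restoration model over a paired test set.

    Assumes the loader yields (degraded, clean) float tensors in [0, 1],
    shaped (N, C, H, W). Placeholder interface, not Uformer's eval code.
    """
    model.eval()
    total_psnr, count = 0.0, 0
    for degraded, clean in loader:
        degraded, clean = degraded.to(device), clean.to(device)
        restored = model(degraded).clamp(0.0, 1.0)
        # Per-image MSE over channels and pixels, then PSNR in dB with peak value 1.0.
        mse = ((restored - clean) ** 2).mean(dim=(1, 2, 3))
        psnr = 10.0 * torch.log10(1.0 / mse.clamp(min=1e-10))
        total_psnr += psnr.sum().item()
        count += psnr.numel()
    return total_psnr / count
```

Running the same loop over SIDD, GoPro, and at least one dataset the model was never tuned on makes a ~1 dB claim much easier for reviewers to trust.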