Despite OpenMP being the de facto standard for parallel programming on shared memory systems, little is known about how its schedule type and chunk size affect the parallel performance of shared memory multicore processors. Performance analyses in the literature have overlooked the effects of different schedule types and chunk sizes, possibly because they were simply not the focus of that research. Often, researchers did not specify the schedule type explicitly, so the loop iterations were assigned among threads in the default way: the static schedule is used with a chunk size equal to the ratio of the total number of iterations to the number of threads. In contrast, this research proposes a guideline for selecting the appropriate schedule type and chunk size to achieve optimum performance on different shared memory multicore platforms for balanced and imbalanced workloads. Three multicore processors, namely the Intel Core i5-2410M, AMD A12-9700P, and ARM Cortex-A53, are used for this work. The speedup obtained after turning certain multicore technologies on or off and varying the number of active cores per processor is analyzed. The results of the analysis enable the user to justify and exercise trade-offs in selecting the OpenMP schedule type and chunk size, and in choosing the multicore technologies, to meet the desired performance gain. Results across various configurations of multicore platforms and workloads suggest that, under certain constraints, different schedule types and chunk sizes lead to better speedup.
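For illustration, a minimal C/OpenMP sketch of the schedule clause discussed above is shown below; it is not taken from the paper's benchmark codes, and the loop bound N and the chunk value 1000 are illustrative assumptions. It contrasts the implicit default (static schedule with roughly N divided by the number of threads iterations per chunk) with an explicitly specified schedule type and chunk size.

/* Minimal sketch (illustrative only): specifying OpenMP schedule type
 * and chunk size. Loop bound N and the chunk value 1000 are assumptions. */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];

    /* Default behaviour described above: schedule(static), with each
     * thread receiving a contiguous block of about N / num_threads
     * iterations. */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        a[i] = i * 0.5;

    /* Explicit schedule type and chunk size, e.g. dynamic scheduling
     * with chunks of 1000 iterations, which can help when the per-
     * iteration workload is imbalanced. */
    #pragma omp parallel for schedule(dynamic, 1000)
    for (int i = 0; i < N; i++)
        a[i] += i % 7;

    printf("a[N-1] = %f\n", a[N - 1]);
    return 0;
}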