Implement a 'valid' 1D convolution (no padding, stride = 1) over a numeric input array using a given kernel and scalar bias. Do not reverse the kernel (use cross-correlation semantics). Given an input of length n and a kernel of length k, return an output array of length n − k + 1 where output[i] = sum_{j=0}^{k-1} input[i + j] * kernel[j] + bias. Handle floating-point values, and discuss time and space complexity.
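
A minimal sketch of one way to meet this spec, assuming plain Python lists of floats; the name conv1d_valid and its signature are illustrative, not fixed by the task:

```python
def conv1d_valid(inputs, kernel, bias=0.0):
    """'Valid' 1D cross-correlation with a scalar bias (illustrative sketch).

    output[i] = sum_{j=0}^{k-1} inputs[i + j] * kernel[j] + bias
    """
    n, k = len(inputs), len(kernel)
    if k == 0 or k > n:
        raise ValueError("kernel length must be between 1 and len(inputs)")
    out = []
    for i in range(n - k + 1):       # one sliding window per output position
        acc = bias                   # each window starts from the scalar bias
        for j in range(k):           # kernel is NOT reversed (cross-correlation)
            acc += inputs[i + j] * kernel[j]
        out.append(acc)
    return out
```

For example, conv1d_valid([1.0, 2.0, 3.0, 4.0], [1.0, -1.0], bias=0.5) returns [-0.5, -0.5, -0.5], an output of length n − k + 1 = 3. Under this sketch, time is O((n − k + 1)·k), i.e. O(n·k) for k ≤ n, and extra space is O(n − k + 1) for the output array.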