Hi Kelman David, Thank you for raising this genuine concern. I understand your reasoning and your discomfort.
- Yes, the problems you’re running into with Oracle NUMBER fields and Parquet output are tied specifically to the newer v2.0 Oracle connector in Azure Data Factory (ADF). The older v1.0 connector was more forgiving when it came to handling Oracle's flexible NUMBER data types. It often allowed the data to flow through without requiring strict precision or casting, even when writing to Parquet.
- With v2.0, Microsoft has introduced stricter type enforcement. This means that when a column doesn't have a clearly defined precision or scale in Oracle, it can trigger errors during the write to Parquet—because Parquet requires that decimals conform to a fixed precision (maximum 38 digits). These stricter checks are part of broader changes aimed at improving data integrity and compatibility with secure standards like TLS 1.3. So yes, these issues you're now seeing didn’t exist with v1.0 and are new to v2.0.
Why would Microsoft release a connector that breaks functionality?
That’s a valid concern, and many teams have asked the same. Microsoft’s goal with v2.0 wasn’t to break things; it was to modernize the connector by aligning it with newer platform and security standards. This includes better performance, improved handling of secure connections, and more consistent mapping of data types. Unfortunately, this also meant tightening the rules around how data types like NUMBER are handled, which has introduced breaking changes for existing pipelines, especially those that rely on implicit conversions when writing to formats like Parquet.
Could you clarify, please: with the v1.0 connector being deprecated, what does that actually mean for us?
Microsoft has officially announced that Oracle connector v1.0 will no longer receive feature updates after July 31, 2025, and will be fully unsupported by October 31, 2025. After this point, the connector is not just unsupported; it may actually be removed entirely, meaning pipelines that depend on it could fail or be blocked from running.
So, “deprecation” here means:
- No more updates or fixes after July 2025.
- No guarantee it will keep working after October 2025.
- Microsoft might remove it completely, especially if security issues arise.
Since manually rewriting SQL queries to cast every NUMBER column isn’t realistic for 150+ tables, here are some practical alternatives:
- Use a more flexible file format temporarily: Instead of writing directly to Parquet, consider using Avro or even CSV as a staging format. These formats are more tolerant of Oracle’s flexible number types and can later be converted to Parquet with a dedicated transformation step (e.g., in Synapse, Data Flow, or Databricks); a sketch of this conversion step follows this list.
- Automate the casting logic: You could query Oracle’s metadata (ALL_TAB_COLUMNS) to identify NUMBER columns that lack precision, and automatically inject CAST(... AS NUMBER(18,0)) into your generated queries. This can be scripted in Python or another tool integrated with your pipeline config logic; see the second sketch below.
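To make the staging idea concrete, here is a minimal Python sketch of the downstream conversion step, assuming CSV staging and the pyarrow package; the file paths and column names are hypothetical, and in practice this logic would live in your Synapse, Data Flow, or Databricks step:

```python
# Minimal sketch: convert a CSV staging file to Parquet with fixed-precision
# decimals (hypothetical paths and column names; assumes pyarrow).
import pyarrow as pa
import pyarrow.csv as pv
import pyarrow.parquet as pq

# Columns that were precision-less NUMBERs in Oracle, pinned to an explicit
# precision/scale so the Parquet write cannot fail on an unbounded decimal.
decimal_columns = {
    "ORDER_ID": pa.decimal128(18, 0),
    "AMOUNT": pa.decimal128(18, 2),
}

table = pv.read_csv(
    "staged/orders.csv",  # file produced by the ADF copy activity
    convert_options=pv.ConvertOptions(column_types=decimal_columns),
)
pq.write_table(table, "curated/orders.parquet")
```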
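And here is a rough sketch of the casting automation, assuming the python-oracledb driver; the connection details, schema/table names, and the NUMBER(18,0) target are placeholders to adapt to your own pipeline config logic:

```python
# Rough sketch: generate a SELECT that casts precision-less NUMBER columns
# (connection details and schema/table names are placeholders).
import oracledb

conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/orclpdb1")

def build_select(owner: str, table_name: str) -> str:
    """Build a SELECT casting NUMBER columns without precision to NUMBER(18,0)."""
    meta_sql = """
        SELECT column_name, data_type, data_precision
        FROM all_tab_columns
        WHERE owner = :owner AND table_name = :table_name
        ORDER BY column_id
    """
    cols = []
    with conn.cursor() as cur:
        for name, dtype, precision in cur.execute(
            meta_sql, owner=owner, table_name=table_name
        ):
            if dtype == "NUMBER" and precision is None:
                # No declared precision: inject an explicit cast for Parquet.
                cols.append(f'CAST("{name}" AS NUMBER(18,0)) AS "{name}"')
            else:
                cols.append(f'"{name}"')
    return f'SELECT {", ".join(cols)} FROM "{owner}"."{table_name}"'

print(build_select("SALES", "ORDERS"))
```

The generated SELECT can then be plugged into each copy activity’s source query instead of hand-editing 150+ statements.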
Sharing the [feedback] link so that this issue gets highlighted in the Microsoft forum.
Please do not forget to click "Accept the Answer" and "Yes" wherever the information provided helps you, as this can be beneficial to other community members.
If you have any other questions or are still running into issues, let me know in the comments and I would be happy to help.