I have some complex SQL statements that I'm executing from Python with pyodbc. For now, I'd prefer to do it this way instead of trying to refactor the code to use an ORM. The code inserts records with a loop. Here is a simplified example.
```sql
-- Test.sql
USE TESTDB;

DROP TABLE IF EXISTS TestTable;

CREATE TABLE TestTable (TheValue INT);

DECLARE @i INT = 1;
WHILE @i < 501
BEGIN
    INSERT INTO TestTable (TheValue) SELECT @i;
    SET @i = @i + 1;
END
```

Here is the simplified version of how I'm calling it.
```python
import pyodbc as db

with open('Test.sql', 'r') as fd:
    sql_file = fd.read()

sqlCommands = sql_file.split(';')

connection_string = 'DRIVER={SQL Server Native Client 11.0};' \
                    'Server=SERVERNAME;' \
                    'Trusted_Connection=yes;' \
                    'database=master'

for command in sqlCommands:
    try:
        conn = db.connect(connection_string)
        csr = conn.cursor()
        csr.execute(command)
        conn.commit()
    except Exception as e:
        print(e)
    finally:
        csr.close()
        conn.close()
```

When I execute the script in SSMS, I get the 500 records I expect. When I execute it from Python, I get a seemingly random 37 records.
Is this some sort of limitation because I'm using SQL Server Developer Edition? If it is, I can't find any documentation around the limit.
Even if I did refactor the Python code to use an ORM, or even something as simple as a dataframe, would that fix the issue?
Add `SET NOCOUNT ON;` to the start of your batch. I could reproduce this (though with a different number of rows `INSERT`ed). But, as mentioned, don't use a loop; if you want the numbers 1-500, use a tally.

`sql_file.split(';')` seems wrong: there are any number of reasons why multiple statements should be in the same batch, especially if you have complex procedural code. As stated, you should really use a tally table or a numbers function; see sqlperformance.com/2021/01/t-sql-queries/…
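The two suggestions above can be sketched on the Python side. This is a minimal, illustrative example (not the asker's actual script): it assumes the file is split into batches on `GO` separator lines, the way SSMS does, instead of on semicolons, and it prepends `SET NOCOUNT ON;`. The `split_batches` helper and the `tally_sql` string are hypothetical; the pyodbc connection shown in comments reuses the placeholder connection string from the question.

```python
import re

def split_batches(script: str) -> list:
    """Split a T-SQL script into batches on lines containing only GO
    (the separator SSMS uses), rather than splitting on semicolons,
    so procedural code stays together in one batch."""
    batches = re.split(r'(?im)^\s*GO\s*$', script)
    return [b.strip() for b in batches if b.strip()]

script = """
SET NOCOUNT ON;  -- suppress DONE_IN_PROC rowcount messages
DROP TABLE IF EXISTS TestTable;
CREATE TABLE TestTable (TheValue INT);
GO
DECLARE @i INT = 1;
WHILE @i < 501
BEGIN
    INSERT INTO TestTable (TheValue) SELECT @i;
    SET @i = @i + 1;
END
"""

batches = split_batches(script)
# Two batches: the DDL, then the loop; semicolons inside each batch survive.
print(len(batches))  # 2

# A set-based alternative to the WHILE loop, using one common
# "numbers source" pattern (a tally), as the comments suggest:
tally_sql = """
INSERT INTO TestTable (TheValue)
SELECT TOP (500) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects;
"""

# Hypothetical execution against the server from the question:
# import pyodbc
# conn = pyodbc.connect(connection_string, autocommit=True)
# csr = conn.cursor()
# for batch in batches:
#     csr.execute(batch)
```

The key point is that each `execute()` call then receives a whole batch, so the loop (or the tally insert) runs as a unit, and `SET NOCOUNT ON` stops the per-row rowcount messages that cut the loop short under pyodbc.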